Abstract  The current push in industry is focused on ensuring not only that a product performs as desired but also that the product consistently performs as desired. To ensure consistency in product performance, “quality” is measured, improved, and controlled. Most quality initiatives have originated and been implemented in the product manufacturing stages. More recently, however, it has been observed that much of a product’s performance and quality is determined by early design decisions, by the design choices made early in the product design cycle. Consequently, quality pushes have made their way into the design cycle, and “design for quality” is the primary objective. How is this objective measured and met?

The most recent quality philosophy, also originating in a manufacturing setting, is six sigma. The concepts of six sigma quality can be defined in an engineering design context through relation to the concepts of design reliability and robustness – probabilistic design approaches. Within this context, design quality is measured with respect to probability of constraint satisfaction and sensitivity of performance objectives, both of which can be related to a design “sigma level”. In this paper, we define six sigma in an engineering design context and present an implementation of design for six sigma – a robust optimization formulation that incorporates approaches from structural reliability and robust design with the concepts and philosophy of six sigma. This formulation is demonstrated using a complex automotive application: vehicle side impact crash simulation. Results presented illustrate the tradeoff between performance and quality when optimizing for six sigma reliability and robustness.

Key words  six sigma, sigma level, robustness, reliability, probability of failure

1 Introduction

Most real-world engineering problems involve at least an element of uncertainty (if not substantial uncertainty) – uncertainty in loading conditions, in material characteristics, in analysis/simulation model accuracy, in geometric properties, in manufacturing precision, in actual product usage, etc. Many optimization strategies, however, do not incorporate this uncertainty into a mathematical formulation. Optimization algorithms tend to push a design toward one or more constraints until the constraints are active, leaving the designer with a “high-risk” design, one for which even slight uncertainties in the problem formulation or changes in the operating environment could produce failed or unsafe designs. In addition, optimization algorithms tend to search for “peak” solutions, ones for which even slight changes in design variables and uncontrollable, uncertain parameters can result in substantial performance degradation. In this case the “optimal” performance is misleading: worst-case performance could potentially be much less than desirable and failed designs could occur.
Received: 30 August 2001
Revised manuscript received: 27 September 2002
Published online: 16 January 2004
© Springer-Verlag 2004

P.N. Koch¹, R.-J. Yang² and L. Gu²
¹ Engineous Software, Inc., 2000 CentreGreen Way, Suite 100, Cary, NC 27513, Tel.: 919-677-6700x262, e-mail: patrick.koch@engineous.com
² Ford Motor Company, Vehicle Safety Research Department, 20000 Rotunda Dr., P.O. Box 2053, Dearborn, MI 48121-2053

Traditionally, engineering problems have been formulated to handle uncertainty only through crude safety factor methods that often lead to overdesigned products and do not offer insight into the effects of individual uncertainties and the actual margin of safety. In recent years, however, probabilistic design analysis and optimization methods, often also called quality engineering methods, have been developed to address uncertainty and randomness through statistical modeling and probabilistic analysis. These probabilistic methods have been developed
to convert deterministic problem formulations into probabilistic formulations to model and assess the effects of known uncertainties. Until very recently, however, the computational expense of probabilistic analysis of a single design point, in terms of the number of function evaluations necessary to accurately capture performance variation and estimate probability of failure, has made the application of these methods impractical for all but academic investigations or very critical cases (Thanedar and Kodiyalam 1991). Consequently, probabilistic optimization has been considered prohibitively expensive, particularly for complex, multidisciplinary engineering design problems. With the steady increases in computing power, large-scale parallel processing capabilities, and availability of probabilistic analysis and optimization tools and systems, however, the combination of these technologies can facilitate effective probabilistic analysis and optimization for complex design problems, allowing for the identification of designs that qualify as not only feasible but as consistently feasible in the face of uncertainty.

The probabilistic/statistical “quality engineering” design methods developed in recent decades have come from several different communities: structural reliability (Thanedar and Kodiyalam 1991; Hasofer and Lind 1974; Rackwitz and Fiessler 1978; Hohenbichler and Rackwitz 1981; Madsen et al. 1986; Thoft-Christensen and Baker 1982; Cornell 1969; Melchers 1999), Taguchi robust quality engineering (Byrne and Taguchi 1987; Ross 1996; Phadke 1989) and, more recently, “six sigma” quality engineering (Harry 1997; Harry and Schroeder 2000; Pande et al. 2000). Each of these classes of approaches has a specific focus in probabilistic analysis and/or optimization. Structural reliability analysis is geared toward assessing the probability of failure of a design with respect to specified structural and/or performance constraints and the evaluated variation of these constraint functions. Thus a deterministic design problem is converted to a reliability analysis problem by converting deterministic constraints into probabilistic constraints:

g(X) ≤ 0   (1)

becomes

P_f = P(g(X) > 0) ≤ P^U ,   (2)

where X is the set of design parameters, one or more of which are uncertain or vary randomly, P_f is the probability of failure for the constraint or limit state function g, and P^U is the required upper bound for the probability of failure (Melchers 1999). Reliability-based optimization thus focuses on the effects of input uncertainties, modeled through random variables with assigned probability distributions and properties, upon satisfaction of probabilistic constraints (Thanedar and Kodiyalam 1991; Belegundu 1988; Choi et al. 1996; Chen et al. 1997; Xiao et al. 1999; Koch and Kodiyalam 1999; Koch et al. 2000).

With Taguchi-based quality engineering methods, the focus is on performance objectives, which are expanded from deterministic “minimize” or “maximize” objectives to include both mean performance and performance variation. With Taguchi methods, the goals are to drive mean performance toward a target, “mean on target”, and to “minimize variance” of performance (Phadke 1989). Taguchi methods employ metrics such as signal-to-noise ratio and loss function to achieve these goals (Ross 1996; Phadke 1986). However, within Taguchi-based methods, constraints are not formulated, as is typically done with optimization formulations. Taguchi methods employ design of experiments (DOE) (Montgomery 1996) to evaluate potential designs. The best alternative with respect to the chosen objective metrics is selected from among those evaluated. Numerical optimization is not generally performed between evaluated design points.

The rapidly growing current push in industry with respect to managing uncertainty and seeking “quality” products is design for six sigma or DFSS. At the heart of DFSS methods is DOE and other statistical analysis methods used to capture performance variation, with a focus on maintaining ±6-sigma (±6 standard deviations) of performance variation within design specification limits (Harry 1997; Harry and Schroeder 2000; Pande et al. 2000). Thus the focus here is again on constraints, as with the probabilistic constraints in reliability-based design. With DFSS arising from manufacturing quality control initiatives, DOE with a minimum number of physical experiments is often critical. Like Taguchi methods, optimization is again generally not performed since physical experiments are conducted; optimization is usually not even mentioned in the few available DFSS references.

There is significant overlap between these different schools of probabilistic approaches for uncertainty assessment. Designing for a quality level of 6-sigma with respect to design specification limits is the equivalent of designing for a reliability level of 99.9999998% (probability of failure of 0.0000002%) with respect to defined limit state functions. However, each of these different types of probabilistic approaches focuses on only part of a design or optimization problem (constraints or objective only), and optimization is often not included. With much of engineering design today performed using computer simulation models, and with the level of uncertainty prevalent in both these models and the engineering problems themselves, the application of probabilistic analysis and optimization is not only feasible but critical for identifying designs that perform consistently in the face of this uncertainty. What is needed is a complete formulation that incorporates concepts from each of these schools of probabilistic analysis and design to facilitate comprehensive probabilistic optimization.

In this paper, we present a six sigma-based robust design optimization formulation that combines approaches from structural reliability and robust design with the concepts and philosophy of six sigma to facilitate comprehensive probabilistic optimization. Variability is incorporated within all elements of this probabilistic optimization formulation – input design variable bound formulation, output constraint formulation, and robust objective formulation. An overview of the background and concepts of six sigma and relations to reliability and robustness is presented in the next section. In Sect. 3, we detail the six sigma-based robust optimization formulation, as implemented within a commercial software design environment (Koch et al. 2002). In Sect. 4, this formulation is demonstrated using a complex automotive application – vehicle side impact crash simulation. A discussion of results and current investigations is provided in closing in Sect. 5.

2 Six sigma design quality: reliability and robustness

Six sigma is a quality philosophy at the highest level, relating to all processes, and a quality measure at the lowest level. The term “sigma” refers to standard deviation, σ. Standard deviation or variance, σ², is a measure of dispersion of a set of data around the mean value, µ, of this data. This property can be used both to describe the known variability of factors that influence a system (product or process) and as a measure of performance variability, and thus quality. Performance variation can be characterized as a number of standard deviations from the mean performance, as shown in Fig. 1. The areas under the normal distribution in Fig. 1 associated with each σ-level relate directly to the probability of performance falling in that particular range (for example, ±1σ is equivalent to a probability of 0.683). These probabilities are displayed in Table 1 as percent variation and number of defective parts per million parts.

Fig. 1  Normal distribution, 3-σ design

Table 1  Sigma level as percent variation and defects per million

Sigma    Percent       Defects per million   Defects per million
level    variation     (short term)          (long term – 1.5 sigma shift)
±1σ      68.26         317 400               697 700
±2σ      95.46         45 400                308 733
±3σ      99.73         2700                  66 803
±4σ      99.9937       63                    6200
±5σ      99.999943     0.57                  233
±6σ      99.9999998    0.002                 3.4

Quality can be measured using any of the variability metrics in Table 1 – “sigma level”, percent variation or probability (equivalent to reliability), or number of defects per million parts – by comparing the associated performance specification limits and the measured performance variation. In Fig. 1, the lower and upper specification limits that define the desired performance range are shown to coincide with ±3σ from the mean. The design associated with this level of performance variance would be considered a “3σ” design. Is this design of acceptable quality? Traditionally, if ±3σ worth of performance variation was identified to lie within the set specification limits, that was considered acceptable variation; in this case, 99.73% of the variation is within specification limits, or the probability of meeting the requirements defined by these limits is 99.73%. In engineering terms, this probability was deemed acceptable.

More recently, however, the 3σ quality level has been viewed as insufficient quality, initially from a manufacturing perspective and currently extending to an engineering design perspective. Motorola, in defining “six sigma quality” (Harry and Schroeder 2000), translated the sigma quality level to the number of defective parts per million (ppm) parts being manufactured. In this case, as can be seen in Table 1, ±3σ corresponds to 2700 ppm defective. This number was deemed unacceptable. Furthermore, Motorola and others observed that even at some observed variation level, mean performance could not be maintained over time. If a part is to be manufactured to some nominal specification, plus/minus some specification limit, the mean performance will change and the distribution thus shift. A good example of this is tool wear. If a process is set up to manufacture a part to a nominal dimension of 10 in. with a tolerance of ±0.10 in., even if the process meets this nominal dimension on average, the cutting tool will wear with time, and the average part dimension will shift to, say, 10.05 in. This will cause the distribution of performance variation to shift while the specification limits remain fixed, and thus the area of the distribution outside one of the specification limits will increase.

This shift was observed by Motorola and others to be approximately 1.5σ and was used to define “long-term sigma quality” as opposed to “short-term sigma quality”. This explains the last column in Table 1. While the defects per million parts for the short term correspond directly to the percent variation for a given sigma level associated with the standard normal distribution, the defects per million parts for the long term correspond to a 1.5σ shift in the mean. In this case, 3σ quality leads to 66 803 defects per million, which is certainly undesirable. Consequently, a quality goal of ±6σ was defined; hence “six sigma quality” came to define the desired level of acceptable performance variation. With this quality goal, the level of defects per million parts, as shown in Table 1, is 0.002 for short-term quality and 3.4 for long-term quality; both are acceptable quality levels.

The focus on achieving six sigma quality is commonly referred to as design for six sigma (DFSS): striving to maintain six standard deviations (µ ± 6σ) of performance variation within the defined acceptable limits, as illustrated in Fig. 2. In Fig. 2a and b, the mean, µ, and lower specification limit (LSL) and upper specification limit (USL) on performance variation are held fixed. Figure 2a represents the 3σ design of Fig. 1; ±3σ worth of performance variation is within the defined specification limits. To achieve a 6σ design, one for which the probability that the performance will remain within the set limits is essentially 100%, the performance variation must be reduced (reduced σ_y), as shown in Fig. 2b. Note that while 6σ is a measure of design quality, six sigma is a quality improvement process and philosophy; a six sigma process does not imply or guarantee a 6σ design.

Fig. 2  Design for six σ (LSL, USL: lower, upper specification limits)

In an engineering design context, these concepts of six sigma quality, measurement, and improvement can be related to two probabilistic design measures: reliability and robustness. The two goals in designing for quality are: (1) striving to maintain performance within acceptable limits, consistently (reliability), and (2) striving to reduce performance variation and thus increase robustness. Reliability is defined as the probability of satisfying constraints; conversely, the probability of failure, the probability of not satisfying constraints, is often measured. The reliability can be directly related to the sigma level: a short-term sigma level of ±3σ is equivalent to a reliability of 99.73%. The term “robustness”, on the other hand, refers simply to the amount of performance variability. In the robust engineering design context, robustness is defined as the sensitivity of performance parameters to fluctuations in uncertain design parameters. This sensitivity is captured through performance variability estimation. The fundamental motive underlying robust design is to improve the quality of a product or process by not only striving to achieve performance targets or goals (“mean on target”) but also by minimizing performance variation (Chen et al. 1996).

In implementing a six sigma approach in engineering design, to design for six sigma, performance variability is measured and, if needed, improved with respect to the two mentioned quality goals by shifting the performance distribution relative to the constraints or specification limits (thus improving reliability) and by shrinking performance variation (Fig. 3).

Fig. 3  Design for six sigma: reliability “shift” and robust “shrink”
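The sigma-level correspondences of Table 1 follow directly from the standard normal distribution, and can be sketched with the Python standard library. The 1.5σ mean-shift handling below follows the long-term convention described above; no values are assumed beyond those in the table.

```python
from math import erf, erfc, sqrt

def fraction_within(k):
    """Fraction of a normal population within ±k sigma (mean centered)."""
    return erf(k / sqrt(2.0))

def dpm_short_term(k):
    """Defects per million with the mean centered between the limits."""
    return (1.0 - fraction_within(k)) * 1e6

def dpm_long_term(k, shift=1.5):
    """Defects per million after the empirical 1.5-sigma mean shift:
    tail areas beyond (k - shift) and (k + shift) sigma."""
    phi_tail = lambda z: 0.5 * erfc(z / sqrt(2.0))  # P(Z > z), standard normal
    return (phi_tail(k - shift) + phi_tail(k + shift)) * 1e6

print(round(dpm_short_term(3)))    # 2700, the ±3σ row of Table 1
print(round(dpm_long_term(6), 1))  # 3.4, the familiar six sigma defect rate
```

The long-term entries of Table 1 are dominated by the near tail: after a 1.5σ shift, a ±6σ specification leaves only the area beyond 4.5σ on the shifted side.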
This formulation also facilitates the optimization of design tolerances. By defining the variance or standard deviation of uncertain design parameters not as fixed but as design variables themselves, tolerance design optimization is implemented by seeking standard deviation settings for these parameters that produce acceptable performance variation (objective components, constraints). Note that when implementing tolerance design, a cost model is essential to reflect the cost of reducing tolerances so as to evaluate the performance vs. cost tradeoff (to prevent the optimizer from reducing tolerances to zero in an attempt to reduce performance variation).

The key to implementing this six sigma-based robust optimization formulation is the ability to estimate performance variability statistics to allow the reformulation of constraints and objectives as defined in Eqs. 3–7. Several methods employed in this work for estimating performance variation are discussed in the next section.

3.2 Quality assessment: sampling methods for measuring variation

Estimation of performance variability requires the definition of uncertain design parameters and their expected variability that leads to this performance variability. Uncertainty in design parameters can be characterized in many ways: through distributions and associated statistical properties (as random variables are often characterized) (Hahn and Shapiro 1994), by a range of expected variation (low and high values, as noise factors in Taguchi’s robust design are commonly defined) (Thoft-Christensen and Baker 1982), or by a delta or percent variation around a nominal, baseline value.

Given the definition of uncertain parameters and their variability, the resulting variability of performance parameters can be measured. Many techniques exist for estimating performance variability; the three employed within the six sigma robust optimization procedure are discussed here:

– Monte Carlo simulation,
– Design of experiments,
– Sensitivity-based variability estimation.

Monte Carlo simulation
Monte Carlo simulation (MCS) techniques are implemented by randomly simulating a design or process, given the stochastic properties of one or more random variables, with a focus on characterizing the statistical nature (mean, variance, range, distribution type, etc.) of the responses (outputs) of interest (Hammersley and Handscomb 1964). Monte Carlo methods have long been recognized as the most exact method for all calculations that require knowledge of the probability distribution of responses of uncertain systems to uncertain inputs. To implement a MCS, a defined number of system simulations to be analyzed are generated by sampling values of random variables (uncertain inputs) following the probabilistic distributions and associated properties defined for each.

Several sampling techniques exist for simulating a population; the two techniques implemented in this work are simple random sampling (Hahn and Shapiro 1994), the traditional approach to Monte Carlo, and descriptive sampling (Saliby 1990; Ziha 1995), a more efficient variance reduction technique. These two techniques are compared in Fig. 4. With simple random sampling, the traditional MCS approach shown in Fig. 4a, sample points are generated randomly from each distribution, as its name implies. Sufficient sample points (often thousands or tens of thousands) must be taken to ensure the probability distributions are fully sampled.

Variance reduction sampling techniques have been developed to reduce the sample size (number of simulations) without sacrificing the quality of the statistical description of the behavior of the system. Descriptive sampling is one such sampling technique. In this technique, the probability distribution of each random variable is divided into subsets of equal probability and the analysis is performed with each subset of each random variable only once (each subset of one random variable is combined with only one subset of every other random variable). This sampling technique is similar to Latin hypercube experimental design techniques and is best described through illustration as in Fig. 4b for two random variables. Each row and column in the discretized two-variable space is sampled only once, in random combinations and random order. Only seven points/subsets are shown in Fig. 4b for clarity in illustration; obviously more points are necessary for acceptable estimation, but often an order of magnitude fewer sample points are necessary when compared to simple random sampling.

Fig. 4  Monte Carlo sampling comparison: (a) simple random sampling, (b) descriptive sampling

MCS is also the traditional method of reliability analysis. The probability of failure is estimated simply as the ratio of the number of points violating constraints (points on the failure side of the failure function, or limit state function, g(X) > 0) to the total number of sample points: P_f = N_failed / N_total. The reliability is then 1 − P_f, and the design sigma level can be calculated directly from this reliability. Other information gathered through MCS includes response statistical information (including standard deviation/variance for assessing robustness), visual response scatter, response PDF/CDF (probability density function, cumulative distribution function) information, and a measure of the relative effects of each random variable on a response (identifying key random variables that drive response variability). A sample of such results is presented in Fig. 5.

Design of experiments (DOE)
A second approach to estimating performance variability due to uncertain design parameters is through more structured sampling, using a designed experiment. In DOE, a design matrix is constructed, in a systematic fashion, that specifies the values for the design parameters (uncertain parameters in this context) for each sampled point or experiment. A number of experimental design techniques exist for efficiently sampling values of design parameters; for details on experimental design techniques, see Montgomery (1996).

With DOE, potential values for uncertain design parameters are not defined through probability distributions but rather are defined by a range (low and high values), a nominal baseline plus/minus some delta or percent, or through specified values or levels. In this case, each run in the designed experiment is a combination of the defined levels of each parameter. Consequently, this approach represents a higher level of uncertainty in uncertain design parameters, with expected ranges rather than specific distributions. Given data from a designed experiment, capturing ranges of uncertain parameters, robustness or quality can be assessed by using response statistics to determine sigma level, percent variation within specification limits, probability of meeting specification limits, or defects per million parts based on defined specification limits.

The computational cost of implementing DOE depends on the particular experiment design chosen and the levels of the uncertain parameters but is generally significantly less than that of MCS. However, the estimates may not be as accurate. This method is recommended when distributions are not known and can also be used when uncertain parameters follow a uniform distribution.

Sensitivity-based variability estimation
A third approach to estimating performance variability is a sensitivity-based approach, based on Taylor’s series expansions. In this approach, rather than sampling across known distributions or ranges for uncertain design parameters, gradients for performance parameters are taken with respect to the uncertain design parameters, hence the “sensitivity-based” nature of the approach. Generally, with this approach the Taylor’s expansion is either first order, neglecting higher-order terms, or second order. Obviously there is a tradeoff between expense (number of gradient calculations) and accuracy when choosing to include or neglect the higher-order terms in the expansion. The first-order and second-order formulations are given as follows.

First-order Taylor’s expansion. Neglecting higher-order terms, the Taylor’s series expansion for a performance response, Y, is:

$$Y(x) = y + \frac{dY}{dx}\,\Delta x . \qquad (8)$$

The mean of this performance response is then calculated by setting the uncertain design parameters to their mean value, µ_x:

$$\mu_y = Y(\mu_x) . \qquad (9)$$
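The first-order estimates can be sketched compactly. The mean follows Eq. 9, and the standard deviation uses the standard first-order expression σ_Y = sqrt(Σᵢ (∂Y/∂xᵢ)² σ_{xᵢ}²), i.e. the first term of the second-order result in Eq. 13 (the first-order display itself, presumably Eq. 10, falls outside this excerpt). Gradients are taken here by central finite differences, so the cost is 2n + 1 response evaluations; the response function and input statistics are illustrative assumptions.

```python
import math

def first_order_stats(Y, mu, sigma, h=1e-6):
    """First-order Taylor estimate of the mean and standard deviation of Y(x):
    mu_Y ~ Y(mu), sigma_Y ~ sqrt(sum((dY/dx_i)^2 * sigma_i^2)),
    with gradients computed by central finite differences (2n + 1 evaluations)."""
    mu_Y = Y(mu)
    var_Y = 0.0
    for i, s in enumerate(sigma):
        xp, xm = list(mu), list(mu)
        xp[i] += h
        xm[i] -= h
        dYdx = (Y(xp) - Y(xm)) / (2 * h)
        var_Y += (dYdx * s) ** 2
    return mu_Y, math.sqrt(var_Y)

# Illustrative linear response: for a linear Y the first-order estimate is exact
Y = lambda x: 3.0 * x[0] - 2.0 * x[1] + 5.0
mu_Y, sigma_Y = first_order_stats(Y, mu=[1.0, 2.0], sigma=[0.1, 0.2])
print(mu_Y, sigma_Y)  # 4.0 and sqrt(0.3**2 + 0.4**2) = 0.5
```

For nonlinear responses the same code still runs, but the estimates degrade as curvature grows, which is exactly the tradeoff discussed next.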
Consequently, this approach is significantly more efficient than MCS and often more efficient than DOE while including distribution properties. However, the approach loses accuracy when responses are not close to linear in the region being sampled. This method is therefore recommended when responses are known to be linear or close to linear and also when computational cost is high and rough estimates are acceptable.

Second-order Taylor’s expansion. Adding the second-order terms, the Taylor’s series expansion for a performance response, Y, is:

$$Y(x) = y + \frac{dY}{dx}\,\Delta x + \frac{1}{2}\,\Delta x^{T}\,\frac{d^{2}Y}{dx^{2}}\,\Delta x . \qquad (11)$$

The mean of this performance response is then obtained by taking the expectation of both sides of the expansion:

$$\mu_Y = Y(\mu_x) + \frac{1}{2}\sum_{i=1}^{n} \frac{\partial^{2} Y}{\partial x_i^{2}}\,\sigma_{x_i}^{2} , \qquad (12)$$

and the standard deviation of Y(X) is given by:

$$\sigma_Y = \sqrt{\sum_{i=1}^{n}\left(\frac{\partial Y}{\partial x_i}\right)^{2}\sigma_{x_i}^{2} + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\left(\frac{\partial^{2} Y}{\partial x_i\,\partial x_j}\right)^{2}\sigma_{x_i}^{2}\,\sigma_{x_j}^{2}} , \qquad (13)$$

where σ_{x_i} is the standard deviation of the i-th parameter and σ_{x_j} is the standard deviation of the j-th parameter. For more details on this approach for variability estimation and its use for robust design, see Hsieh and Oh (1992).

Since second-order derivatives of responses, including cross terms, with respect to random variables are needed in Eq. 13, and the mean value point is needed in Eq. 12, the second-order Taylor’s expansion estimates require (n + 1)(n + 2)/2 analyses for evaluation. Consequently, this approach is significantly more computationally expensive than the first-order Taylor’s expansion and usually will be more expensive than DOE. For low numbers of uncertain parameters, this approach can still be more efficient than MCS but becomes less efficient with increasing n. (MCS is not dependent on the number of parameters; for 100 parameters, second-order Taylor’s expansion would require 5151 sample points, which may be more than required by MCS for the same level of accuracy in the estimates.) For efficiency in this implementation, the cross terms of Eq. 13 are ignored, and only pure second-order terms are included (diagonal terms). The number of necessary evaluations is then reduced to 2n + 1. The second-order Taylor’s expansion approach is recommended when curvature exists and the number of uncertain parameters is not excessive.

Any of the sampling methods summarized in this section can be used to estimate performance variability. These estimates can then be used within the robust optimization formulation to improve design quality. One technology that has made this sampling for quality assessment and improvement more feasible for real engineering design problems is parallel processing (Koch et al. 2000). The sample points defined by MCS or DOE are independent points, and the complete set can be defined prior to execution and thus executed in parallel. For the first- or second-order Taylor’s expansion sensitivity methods, if finite differencing is used for calculating the partial derivatives of the performance responses with respect to each random variable, these gradient points are also known prior to execution and can be executed in parallel. Thus any number of machines and/or processors can be used to expedite the sampling procedure.

This six sigma-based robust optimization formulation is demonstrated in the next section for an automotive crashworthiness design example. Results obtained are compared to deterministic optimization results and to results obtained using a reliability-based optimization approach, focusing only on probability of constraint satisfaction.

4 Application: automotive crashworthiness

One quality engineering design application currently of high visibility in the automotive industry is vehicle structural design for crashworthiness. These design problems are not only particularly complex in terms of understanding the problem and defining design requirements and design alternatives, but they also involve a very high degree of uncertainty and variability: velocity of impact, mass of vehicle, angle of impact, and mass/stiffness of barrier are just a few of many uncertain parameters. A balance must be struck in designing for crashworthiness between designing the vehicle structure to absorb/manage the crash energy (through structure deformation) and maintaining passenger compartment integrity, all in the face of uncertainty.

With the expense of physical crash testing and with recent increases in computing power and improvement in numerical modeling techniques, much of crash analysis is done today using computer simulation models. However, these simulations bring a different kind of expense. Vehicle finite element models for crash simulation can be upwards of 150 000 elements. Processing can range from 20 h to over 150 h for a single analysis. Even with parallel processing, the expense of crash simulations prohibits extensive optimization, let alone probabilistic optimization. In addition, solver instability often prevents successful application of sequential (optimization) methods since runs will often fail. It is essential to create approximate models from a number of allowable actual simulations.

In this paper, one specific crash scenario is investigated – side impact – using the six sigma robust optimization formulation presented in the previous section. A typical vehicle side impact model is shown in Fig. 6. Including a finite element dummy model, the total number
erties. The final two parameters in Table 3, the barrier height and hitting position, are pure random variables that are continuous and vary from −30 mm to 30 mm according to the physical test. These pure random variables are used only in the reliability and robustness assessment (fixed during deterministic optimization at given baseline values).
Before running deterministic optimization with this side impact formulation, a sample of 40 crash simulations was executed, in parallel, based on a uniform Latin hypercube design experiment. A stepwise regression approach was used to fit polynomial response surface models to the obtained data, and following verification of these models they were used to obtain the results presented in this paper. For the deterministic optimization, a sequential quadratic programming (SQP) algorithm is employed (appropriate for quadratic response surface models), and a solution is obtained after 30 evaluations of the response surface models. The side impact deterministic optimization results are compared to the baseline design in Table 4. As seen in this table, the optimization solution results in a weight reduction of 5.58 kg but also in three active constraints: the lower rib deflection constraint, pubic symphysis force, and door velocity.
When a reliability analysis is performed at this design solution using Monte Carlo descriptive sampling with 2000 points, the reliability values shown in the fourth column of Table 4 are obtained. As expected with the three active constraints, the reliability values are less than desirable: 49.65% for the lower rib deflection constraint, 39.25% for pubic symphysis force, and 67.45% for door velocity. To improve the reliability associated with these active constraints, probabilistic optimization is implemented.
4.1
Side impact reliability-based optimization
Minimize Weight
Subject to Pr[Abdomen load > 1.0] ≤ 10%
Pr[Rib Deflection > 32] ≤ 10%
Pr[V*C > 0.32] ≤ 10%
Pr[FPS > 4.0] ≤ 10%
Pr[VB-pillar > 9.9] ≤ 10%
Pr[VDoor > 15.70] ≤ 10%
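Each probabilistic constraint of the form Pr[g > limit] ≤ 10% can be estimated by sampling the response surface models. A minimal sketch follows, using ordinary Monte Carlo sampling (the paper uses descriptive sampling, a variance-reduction variant) and a hypothetical linear surrogate `g` standing in for a fitted response surface.

```python
import numpy as np

def constraint_reliability(g, mu, sigma, limit, n_samples=2000, seed=1):
    """Estimate Pr[g(X) <= limit] by plain Monte Carlo sampling of
    independent normal inputs. (Descriptive sampling would stratify
    the sample for lower variance at the same size.)"""
    rng = np.random.default_rng(seed)
    X = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    return np.mean([g(x) <= limit for x in X])

# Hypothetical rib-deflection surrogate with a 32 mm limit
g = lambda x: 28.0 + 1.5 * x[0] - 0.8 * x[1]
rel = constraint_reliability(g, mu=[0.0, 0.0], sigma=[1.0, 1.0], limit=32.0)
feasible = (1.0 - rel) <= 0.10   # is Pr[g > 32] <= 10% ?
```

Evaluating such estimates on the surrogates, rather than the crash simulations, is what makes the 2000-point reliability assessments above affordable.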
Here, “nσ” can be any desired sigma level. Recall that the “six” in “six sigma” does not imply that only 6σ solutions are sought in a six sigma process (6σ is a quality measure, six sigma is a quality improvement philosophy). A six sigma process seeks to explore the tradeoff between performance and quality and to investigate the quality levels achievable for a given design. In many cases, 6σ quality is not achievable for a given design concept, configuration, and/or set of requirements; in many cases the cost of achieving 6σ quality is not bearable with current technology. New technology is not investigated in this side impact problem. In this investigation, 3σ and 6σ solutions are sought (n = 3 and n = 6 in the six sigma robust optimization formulation above).
The results obtained after executing this robust optimization formulation with 3σ and 6σ quality constraints are presented in Table 5. The robust optimization problems are solved employing an SQP algorithm, coupled with the second-order Taylor method of Sect. 3.2 for estimating the mean and standard deviation of all outputs. For the 3σ formulation, robust optimization is performed starting from the deterministic optimization solution given in Table 4. A total of 1236 evaluations of the response surface models are required to obtain this solution, and the desired 3σ quality level is achieved for all constraints. The three active constraints after deterministic optimization drive the 3σ solution, as shown in Table 5. The tradeoff results in a weight increase of 5.92 kg over the deterministic optimization starting point, 0.34 kg over the baseline design. The standard deviation of weight does not change (0.36 kg at both the deterministic optimization solution and the 3σ solution), and thus the objective robustness is not affected.
When the six sigma robust optimization formulation is changed to seek 6σ quality with respect to all constraints, the solution shown in the last two columns of Table 5 is obtained. Once again the SQP algorithm is employed, with the second-order Taylor method for statistics estimation. This robust optimization is performed starting with the 3σ solution and requires an additional 225 evaluations of the response surface models. For this problem, given its current formulation and requirements, 6σ quality is not achievable for all constraints. While the quality level is increased for every constraint, five of the ten constraints do not satisfy the 6σ level. In this case, the design variables are driven to their bounds, and thus this solution represents the best quality allowable given the defined design space. The quality driver in this case is the pubic symphysis force, with the lowest quality level of 3.76σ. For comparison with Fig. 7b, the quality level for pubic symphysis force is shown graphically in Fig. 8a for the 3σ formulation results and in Fig. 8b for the 6σ formulation results. Notice in Fig. 8 (as well as in Fig. 7) that the overall quality or sigma level displayed is different from the sigma position of the upper bound. The position of the upper bound, as a number of standard deviations, or “sigmas”, from the mean, is not equivalent to the sigma level or overall quality level with respect to this constraint. With only one bound present, “sigma level” is a metric that reflects the percent variation outside the bound, the probability of failure, and the equivalent ±nσ. The total probability of failure is the consistent basis for the quality measurement; with a single upper bound at 3.58σ as in Fig. 8b, the percent of the distribution variation outside the constraint is equivalent to that with symmetric lower and upper bounds at ±3.8σ around the mean, and thus the overall quality level is 3.8σ.
The tradeoff between performance and quality for this side impact crashworthiness problem is captured in Fig. 9, where the sigma level quality of each solution presented – deterministic optimum, 90% reliability solution, 3σ solution, and 6σ formulation solution – is plotted
tion is achieved, with a weight essentially equal to that of the baseline design. In pushing the limits of achievable quality in formulating 6σ quality constraints, only 3.76σ is achievable with the current problem formulation. However, side impact is only one crash scenario. Future investigations are necessary to combine various crash modes – front impact, rear impact, roof crush – for full vehicle robust design for crashworthiness.
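The single-bound sigma-level equivalence used in the discussion of Fig. 8 – a one-sided bound carrying the same total probability of failure as symmetric bounds at ±nσ – can be checked numerically. The sketch below assumes a standard normal output distribution (the actual response distributions need not be normal).

```python
from scipy.stats import norm

def overall_sigma_level(z_upper):
    """Convert the one-sided position of an upper bound (in standard
    deviations from the mean) into the equivalent symmetric +/- n sigma
    quality level carrying the same total probability of failure."""
    p_fail = norm.sf(z_upper)       # tail probability beyond the bound
    return norm.isf(p_fail / 2.0)   # n such that 2 * Phi(-n) = p_fail

n = overall_sigma_level(3.58)       # upper bound 3.58 sigma from the mean
```

For an upper bound 3.58σ from the mean this yields n ≈ 3.76, which rounds to the ±3.8σ overall quality level quoted above for Fig. 8b.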