

Proceedings of IDETC/CIE 2007
ASME 2007 International Design Engineering Technical Conferences & Computers and Information in
Engineering Conference
September 4-7, 2007, Las Vegas, Nevada, USA

DETC2007-35782

A NEW VARIABLE FIDELITY OPTIMIZATION FRAMEWORK BASED ON MODEL FUSION AND OBJECTIVE-ORIENTED SEQUENTIAL SAMPLING

Ying Xiong, Wei Chen¹
Department of Mechanical Engineering
Northwestern University

Kwok-Leung Tsui
School of Industrial & Systems Engineering
Georgia Institute of Technology

¹ Corresponding author. Department of Mechanical Engineering, Northwestern University, 2145 Sheridan Road, Tech B224, Evanston, IL 60208-3111. Email: weichen@northwestern.edu. Phone: (847) 491-7019. Fax: (847) 491-3915.

ABSTRACT
Computational models with variable fidelity have been widely used in engineering design. To alleviate the computational burden, surrogate models are used for optimization without recourse to expensive high-fidelity simulations. In this work, a model fusion technique based on Bayesian Gaussian process modeling is employed to construct cheap surrogate models that integrate information from both low-fidelity and high-fidelity models, while the interpolation uncertainty of the surrogate model due to the lack of sufficient high-fidelity simulations is quantified. In contrast to space filling, the sequential sampling of the high-fidelity simulation model in our proposed framework is objective-oriented, aiming at improving a design objective. A strategy based on periodical switching criteria is studied and shown to be effective in guiding the sequential sampling of the high-fidelity model toward both improving the design objective and reducing the interpolation uncertainty. A design confidence (DC) metric is proposed to serve as the stopping criterion and to facilitate design decision making in the presence of interpolation uncertainty. Numerical and engineering examples are provided to demonstrate the benefits of the proposed methodology.

KEYWORDS
Variable fidelity, Optimization, Sequential sampling, Surrogate model, Bayesian approach, Model fusion, Design confidence

1. INTRODUCTION
Variable fidelity methods have been developed to solve optimization problems that involve simulations with extreme computational expense [1-3]. Examples of variable fidelity models in aircraft aerodynamic analysis include low fidelity (LF) models, such as classical aerodynamics or linearized supersonic panel codes, and high fidelity (HF) simulations, such as finite element based Euler/Navier-Stokes solvers. In multiscale material design [4,5], models based on direct numerical simulations become prohibitively expensive due to the vast scale difference between the lowest (atomic) and the highest (macroscopic) scales. While model reduction approaches are available, there is a need for variable fidelity methods that provide seamless integration between high and low fidelity simulations. A critical issue in variable fidelity optimization is to effectively use limited HF simulations to improve the predictions and optimization solutions obtained from LF models.

In conventional surrogate model based optimization, a surrogate model (also called a metamodel) is built purely from HF simulations and is then used in optimization. In the context of variable fidelity optimization, the amount of HF simulation data is typically far from sufficient; consequently, the optimal solution might not be trustworthy. To overcome this limitation, model management strategies, e.g., trust-region or moving-limit based approaches [1][3], which construct an approximation to an HF model by incorporating information from an LF model, have been developed to guarantee local convergence with less computational expense on HF simulation. Such an approximation could be either a local approximation, e.g., a Taylor series or polynomial based response surface, or a global approximation, e.g., a Kriging model [6]. With the global approximation, all historical sampling points are used to construct the surrogate model. One advantage of the trust-region based variable fidelity optimization framework is that convergence to a local minimum using the surrogate model is theoretically guaranteed [1]. However, the trust-region based approach may fail to find the global optimum if multiple local optima exist.

In this work, a new variable fidelity optimization approach is proposed based on Bayesian surrogate modeling for model fusion and on objective-oriented sequential sampling. There are two fundamental differences between our method and traditional trust-region based methods.



First, when using trust-region based variable fidelity optimization, the optimization search is over a surrogate approximation that is constrained within a so-called trust region, the size of which is adjusted depending upon the accuracy of the local approximation in the previous iteration. In contrast, with our approach, a global optimization is performed over a global surrogate model that is updated using new sampling points from the HF model. Second, for trust-region based variable fidelity optimization, the sampling point is identified at the site of the optimum of the local approximation, whereas in our approach the new sampling site is identified by objective-oriented sampling criteria.

How to best integrate LF and HF models in constructing a surrogate model is one of the key issues in variable fidelity optimization. Bayesian style modeling approaches have been developed (e.g., Qian et al. [7]). A similar mathematical framework has been followed to combine the results from physical and computer experiments in our earlier work on model validation [8]. We demonstrate in [9] that the combined models always have better accuracy than using the surrogates of either computer or physical experiments alone. A similar conclusion can be derived for combining results from HF and LF simulations. Associated with a surrogate model is the need for quantifying the interpolation uncertainty, also known as prediction uncertainty [10]. Interpolation uncertainty, which is used to account for the lack of data and knowledge in constructing a surrogate model, plays an important role in sequential sampling, with more details provided in the next paragraph. Extensive efforts have been made to quantify such uncertainty. Among them, Gaussian process (GP) models (with the Kriging model as a special case) are widely used. Apley et al. [10] developed an analytical approach to quantifying the impact of interpolation uncertainty on a robust design objective. In our proposed approach, the constrained linear scaling method is first applied to match the LF model to the HF data, followed by using a Bayesian Gaussian process to account for the remaining discrepancy and to quantify the interpolation uncertainty.

The other important issue associated with surrogate model based design optimization is the sequential updating of the surrogate model with additional data, also called sequential sampling. One category of existing sequential sampling techniques aims at global accuracy in a space-filling fashion, e.g., based on criteria such as the Integrated Mean-Squared Error, MaxiMin/MiniMax, and Maximum Entropy. Recent studies [11-16] have proposed adaptive strategies based on the knowledge obtained from the preceding surrogate model. However, it is generally unaffordable to conduct enough simulation runs to cover the entire input space when simulations are expensive. A more efficient sequential sampling strategy is the so-called objective-oriented approach, which takes the design objective into account. The initial utilizations of the objective-oriented approach and the concept of interpolation uncertainty can be seen in the Efficient Global Optimization (EGO) algorithm developed by Jones et al. [17], using the idea of expected improvement (EI) originally proposed by Mockus et al. [18]. Although EI was demonstrated to be well suited for global deterministic optimization, alternative sequential sampling criteria [19][13] were investigated and shown to have various merits in making the trade-off between optimizing a design objective and reducing the interpolation uncertainty. All of these existing objective-oriented approaches were applied to surrogate models built only from HF simulations. In this work, we extend the same principle to surrogate models built from both LF and HF simulations. We also investigate a new periodical switching criterion to effectively guide the sequential high-fidelity simulations towards both improving a design objective and reducing interpolation uncertainty. Another important feature of the proposed sequential procedure is the use of the design confidence (DC) as a metric to assist designers in deciding when to terminate the sampling process so that the current optimum design can be accepted as the 'true optimal' solution with sufficient confidence.

2. PROPOSED METHODOLOGY

2.1 The Proposed Variable Fidelity Optimization Framework

[Figure 1. Proposed Variable Fidelity Optimization Framework: the LF model yl(x) (or its surrogate) and the HF model yh(x) are fused into a surrogate model ys(x) with interpolation uncertainty quantification; optimization is performed on the predictor of ys(x); if the design confidence is not met, objective-oriented sequential sampling adds new HF samples, otherwise the optimal design x* is accepted.]

The proposed variable fidelity optimization framework in this work is depicted in Figure 1. In contrast to the classical variable fidelity optimization strategy, neither the LF nor the HF model is directly invoked during optimization. Instead, a model fusion technique is applied to combine information from both the LF model yl(x) and the HF simulations yh(xi) (i=1,...,N) to yield a surrogate model ys(x), over which optimization is performed. The proposed model fusion approach follows a Bayesian modeling framework, in which the surrogate model is a combination of an augmented LF model with linear scaling plus a bias function that characterizes the remaining difference with the HF model. With the Bayesian approach, the uncertainty of ys(x) in predicting the HF model can be quantified. As previously stated, this type of uncertainty is called the 'interpolation uncertainty' due to the lack of sufficient HF simulations. When the LF model is cheap to run, it is directly used for model fusion without fitting a surrogate model to replace it. When the LF simulation is expensive, the surrogate of the LF model (dashed-line box in Figure 1) could be used to replace the original LF model. After the surrogate model ys(x) is obtained based on model fusion, design optimization is performed using the predictor (or posterior mean) of the Bayesian surrogate model ys(x).



The design confidence (DC) of the newly identified optimum x* is then assessed considering the interpolation uncertainty associated with ys(x). If the design confidence meets a satisfactory level, or when the computing resource has been exhausted, the sequential sampling process is terminated. Otherwise, an objective-oriented sequential sampling procedure is applied to pick new samples of HF simulations. In the proposed framework, only one sampling point of HF simulation is added at each iteration (also called a 'stage').
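To make the control flow of Figure 1 concrete, the following Python sketch outlines one possible implementation of the loop described above. It is an illustration only: the helper callables (fuse_models, minimize_predictor, design_confidence, pick_next_sample) and the function name are hypothetical stand-ins for the steps detailed in Sections 2.2-2.4, and the termination rule anticipates the two-consecutive-stage criterion of Section 2.4.

```python
# Minimal sketch of the framework in Figure 1 (not the authors' code).
# All callables are hypothetical stand-ins for the steps in Sections 2.2-2.4.
def variable_fidelity_optimize(y_lf, y_hf_sim, x_hf, y_hf, budget,
                               fuse_models, minimize_predictor,
                               design_confidence, pick_next_sample,
                               dc_target=0.95):
    """Fuse LF/HF -> optimize predictor -> check DC -> add one HF sample per stage."""
    dc_history = []
    x_star = None
    for stage in range(budget + 1):
        surrogate = fuse_models(y_lf, x_hf, y_hf)               # Section 2.2
        x_star = minimize_predictor(surrogate)                   # Eq. (13)
        dc_history.append(design_confidence(surrogate, x_star))  # Section 2.4
        # Terminate once two consecutive stages meet the prescribed DC level,
        # or when the HF simulation budget is exhausted.
        if len(dc_history) >= 2 and min(dc_history[-2:]) >= dc_target:
            break
        if stage == budget:
            break
        x_new = pick_next_sample(surrogate, stage)               # Section 2.3 (PSC)
        x_hf.append(x_new)
        y_hf.append(y_hf_sim(x_new))                             # one HF run per stage
    return x_star, dc_history
```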
2.2 Model Fusion and Quantification of Interpolation Uncertainty
The goal of model fusion is to integrate the information from both LF and HF models to build a surrogate for predicting the design behavior. Existing model fusion techniques include, but are not limited to, the difference or scaling approach [3], the Taylor-series approach [6], and the space mapping approach [20]. Due to the lack of expensive HF simulation data in most practical problems, a Bayesian approach to model fusion and quantification of interpolation uncertainty is considered in this work. Among the existing Bayesian modeling approaches, Gaussian process (GP) models (with Kriging as a special case) have gained popularity due to their convenience and flexibility in interpolating deterministic computer data. With Gaussian process models, the mean and covariance functions are specified to reflect prior knowledge about the unknown function. The approach to combining LF and HF simulations described below follows the same mathematical framework as combining computer and physical experiments in our earlier work on model validation [8].

In this work, the mathematical relationship between the HF model yh(x) and the LF model yl(x) is represented as

    $y_h(x) = y_{sl}(x) + \delta(x) = [\rho_0 + \rho_1 y_l(x)] + \delta(x)$,   (1)

where ρ0 and ρ1 are two scaling parameters that define the 'scaled' LF model $y_{sl}(x) = \rho_0 + \rho_1 y_l(x)$; δ(·) is a bias function used to account for the discrepancy between the 'scaled' LF model y_sl(x) and the HF model yh(x). yl(x) is regarded as either the original LF model or its surrogate model ŷl(x).

The scaling parameters ρ0 and ρ1 help bring (or 'scale') the LF model as close as possible to the HF model. Applications of similar linear scaling approaches can be found in previous works, e.g., [3] and [7], even though the details of the Bayesian modeling and its complexity vary. For determining the Bayesian modeling parameters, different approaches (e.g., MLE estimation [7], cross validation [21]) could be employed. In this work, ρ̂0 and ρ̂1 are identified by the least squares (LS) method, together with bound constraints on both ρ̂0 and ρ̂1, as in Eq. (2):

    $\min_{\rho_0,\rho_1} \; L(\rho_0,\rho_1) = \sum_{i=1}^{N} [\rho_0 + \rho_1 y_l(x_i) - y_h(x_i)]^2$,   (2)
    $\text{s.t.} \quad l_0 < \rho_0 < u_0, \quad l_1 < \rho_1 < u_1$,

where L(ρ0, ρ1) stands for the loss function in a squared sense, and xi (i=1,...,N) are the sampling points of the HF model. The bounds (l0, u0) and (l1, u1) posed on parameters ρ0 and ρ1 reflect, respectively, the prior belief of the global constant bias and the multiplicative scaling between the HF and LF models. Due to the use of bounds, we call our method the constrained linear scaling (CLS) approach. Note that if the bound constraints on ρ0 and ρ1 are both ignored, the solution to Eq. (2) is exactly the same as that from a regular regression.
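Since Eq. (2) is a bound-constrained linear least-squares problem in (ρ0, ρ1), it can be solved directly with standard numerical tools. The sketch below is a minimal example assuming NumPy and SciPy; the default bound values are illustrative placeholders, not values taken from the paper, and would in practice be chosen from prior knowledge of the global bias and scaling between the two models.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_cls(y_l_at_x, y_h_at_x, rho0_bounds=(-1.0, 1.0), rho1_bounds=(0.1, 10.0)):
    """Constrained linear scaling, Eq. (2): bounded least squares for (rho0, rho1).

    y_l_at_x, y_h_at_x : LF and HF responses at the N HF sampling points.
    The default bounds are illustrative placeholders, not values from the paper.
    """
    y_l_at_x = np.asarray(y_l_at_x, dtype=float)
    A = np.column_stack([np.ones_like(y_l_at_x), y_l_at_x])   # design matrix [1, y_l(x_i)]
    lb = np.array([rho0_bounds[0], rho1_bounds[0]])
    ub = np.array([rho0_bounds[1], rho1_bounds[1]])
    res = lsq_linear(A, np.asarray(y_h_at_x, dtype=float), bounds=(lb, ub))
    rho0_hat, rho1_hat = res.x
    return rho0_hat, rho1_hat

# Example usage with synthetic data:
# rho0, rho1 = fit_cls([0.2, 0.5, 0.9], [0.3, 0.4, 0.7])
```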
To simplify the Bayesian modeling, the scaling parameters ρ0 and ρ1 are assumed to be unknown but fixed in this work. This treatment is different from [7], where a linear function ρ(x) is used for scaling instead of the two constant parameters ρ0 and ρ1 used in our work. It is our belief that using ρ1 will help better preserve the 'profile' of the LF model, and that the term ρ0 will help satisfy the assumption of a 'zero-mean' prior associated with the bias function δ(·), especially if a global bias exists between the LF and HF models.

The bias function δ(·) accounts for the remaining discrepancy between the HF simulation data and the scaled LF model. For modeling the bias function, it is assumed that δ(·) follows a Gaussian process (GP) with mean function $\beta^T F(x)$ and covariance function C(σ, θ), where σ and θ denote the variance and correlation parameters, respectively. The sampling points for δ(·) are evaluated by

    $\delta(x_i) = y_h(x_i) - \hat{y}_{sl}(x_i) = y_h(x_i) - [\hat{\rho}_0 + \hat{\rho}_1 y_l(x_i)]$.   (3)

It is observed that when data are far from sufficient to explore the behavior of the true HF model, a common Kriging approach [11][7] would typically underestimate the true interpolation uncertainty [22], because all Gaussian process parameters (β, σ², θ) are treated as unknown but fixed and are determined through methods such as maximum likelihood estimation (MLE) [23] and cross validation (CV) [21]. With the Bayesian approach, the prior knowledge of the unknown bias function δ(·) can be expressed through the prior distribution of (β, σ², θ). The posterior distributions of these parameters are quantified and propagated through the predictor. In this work, a semi-Bayesian analysis is implemented in which the correlation parameters θ are treated as unknown but fixed. With such a treatment, by imposing appropriate priors, we can derive the closed forms of the posterior process of the Gaussian process model [8]. The hierarchical Bayesian modeling approach [24][25] is followed to choose the prior distributions of σ² and β, for example,

    $\sigma^2 \sim IG(2,1), \qquad \beta \mid \sigma^2 \sim N(0, \sigma^2 I)$,   (4)

where IG(·,·) is the inverse Gamma distribution and N(·) is the normal distribution. Note that such prior specifications can be used if no previous knowledge is available. Because the scaled LF model y_sl(x) has been pulled as close as possible to the HF simulation data, it is reasonable to assume the prior mean of δ(·) to be zero (i.e., $\beta^T F(x) = 0$); thus β is centered on zero in Eq. (4). Since we pose no uncertainty on ρ̂, the resulting scaled LF model $\hat{\rho}_0 + \hat{\rho}_1 y_l(x)$ is viewed as deterministic. As a result, the surrogate model ŷs(·) can be represented as the predictor or the (posterior) mean

    $\hat{y}_s(x) = [\hat{\rho}_0 + \hat{\rho}_1 y_l(x)] + \hat{\delta}(x)$.   (5)



The interpolation uncertainty is contributed only by the interpolation uncertainty of the bias function δ(·), i.e.,

    $\mathrm{Var}[y_s(\cdot)] = \mathrm{Var}[\delta(\cdot)]$,   (6)
    $\mathrm{Cov}[y_s(x), y_s(x')] = \mathrm{Cov}[\delta(x), \delta(x')]$,   (7)

where Var[·] and Cov[·,·] denote the variance and covariance, respectively. The closed-form expressions for the mean, variance, and covariance of the posterior process δ(·) are not provided here but can be found in [8].
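The following sketch illustrates the fusion step in simplified form: it evaluates the bias samples of Eq. (3), fits a Gaussian process to them, and adds the GP back to the scaled LF model as in Eq. (5), with the GP posterior variance playing the role of Eq. (6). Note that it substitutes an ordinary scikit-learn GP (hyperparameters estimated by MLE) for the semi-Bayesian posterior of δ(·) used in this work, whose closed forms are given in [8]; the kernel choice and class interface are assumptions made for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

class FusedSurrogate:
    """Scaled-LF-plus-bias surrogate in the spirit of Eqs. (3) and (5)-(6).

    An ordinary GP with MLE hyperparameters stands in for the paper's
    semi-Bayesian posterior of delta(.); see [8] for the closed forms.
    """
    def __init__(self, y_l, rho0, rho1):
        self.y_l, self.rho0, self.rho1 = y_l, rho0, rho1      # LF model and CLS parameters
        kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)   # illustrative kernel choice
        self.gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

    def _scaled_lf(self, X):
        return self.rho0 + self.rho1 * np.ravel([self.y_l(x) for x in X])

    def fit(self, X_hf, y_hf):
        X_hf = np.asarray(X_hf, dtype=float).reshape(len(X_hf), -1)
        residual = np.ravel(y_hf) - self._scaled_lf(X_hf)      # bias samples, Eq. (3)
        self.gp.fit(X_hf, residual)
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float).reshape(len(X), -1)
        delta_mean, delta_std = self.gp.predict(X, return_std=True)
        mean = self._scaled_lf(X) + delta_mean                 # predictor, Eq. (5)
        return mean, delta_std ** 2                            # variance from delta only, Eq. (6)
```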
2.3 Objective-Oriented Sequential Sampling
Sequential sampling is a critical step in the variable fidelity optimization framework. Design decisions made via the predictor (mean) of the Bayesian surrogate model can be improved by reducing the interpolation uncertainty through the inclusion of more data. The concept of Expected Improvement (EI) was first applied in the Efficient Global Optimization (EGO) algorithm proposed by Jones et al. [17]. The algorithm selects the next sampling site to maximize the expected improvement of a design objective function with respect to the interpolation uncertainty. The EI function is formulated as $E[\max\{0, y_{min} - y_s(x_{N+1})\}]$ and further expanded as

    $EI(x_{N+1}) = [y_{min} - \mu_y]\,\Phi\!\left(\frac{y_{min} - \mu_y}{\sigma_y}\right) + \sigma_y\,\phi\!\left(\frac{y_{min} - \mu_y}{\sigma_y}\right)$,   (8)

where $y_{min} = \hat{y}_s(x_i^*)$, $x_i^*$ is the best design point based on the current metamodel, $y_s(x_{N+1})$ stands for the prediction (mean) from the surrogate model at site $x_{N+1}$, $\mu_y = \hat{y}_s(x_{N+1})$, and $\sigma_y = \mathrm{Var}^{1/2}[y_s(x_{N+1})]$. According to the studies by Sasena [19] and Sekishiro et al. [27], although the EI method is intended to balance the local search and the global search, it may not converge in some cases. Sometimes it is difficult to optimize the EI function because it may show extremely 'bumpy' behavior at existing sampling points and extremely 'flat' (close to zero) behavior in regions that are largely inferior to the optimum.
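A minimal implementation of Eq. (8) is shown below, assuming the surrogate supplies a posterior mean and variance at the candidate site; Φ and φ are the standard normal CDF and PDF from SciPy.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu_y, var_y, y_min):
    """Expected improvement, Eq. (8), at a candidate site x_{N+1}.

    mu_y, var_y : posterior mean and variance of y_s at the candidate site
    y_min       : best predicted value over the sampled designs so far
    """
    sigma_y = np.sqrt(np.maximum(var_y, 1e-16))  # guard against zero variance
    u = (y_min - mu_y) / sigma_y
    return (y_min - mu_y) * norm.cdf(u) + sigma_y * norm.pdf(u)
```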
In this work, we examine the use of the Statistical Lower Bounding (SLB) criterion [26,28], which has the following form:

    Minimize  $SLB(x_{N+1}) \equiv \mu_{y_s}(x_{N+1}) - k\,\sigma_{y_s}(x_{N+1})$,   (9)

where $\mu_{y_s}(x_{N+1}) = \hat{y}_s(x_{N+1})$, $\sigma_{y_s}(x_{N+1}) = \mathrm{Var}^{1/2}[y_s(x_{N+1})]$, and k is a user-defined parameter. A larger k value places more emphasis on reducing interpolation uncertainty, i.e., on global search. The SLB criterion is easier to control and interpret than the EI criterion. By assigning k = +∞ and k = 0, respectively, we obtain two extreme cases of the SLB criterion:

    Maximize  $\mathrm{Var}(x_{N+1}) \equiv \sigma^2_{y_s}(x_{N+1})$  ('Max Var'),   (10)
    Minimize  $\mathrm{Mean}(x_{N+1}) \equiv \mu_{y_s}(x_{N+1})$  ('Min Mean').   (11)

'Max Var' focuses on exploring regions with large interpolation uncertainty; 'Min Mean' focuses on locating local optima. In the work of Sasena [19], it was proposed to alternate the two criteria under a subjective guideline. In the method described by Sekishiro et al. [27], two samples, one based on 'Max Var' and the other on 'Min Mean', are used at each stage. One possible drawback common to these strategies is that consecutive sampling points may pile up at extremes, causing ill-conditioning when calculating the inverse of the covariance matrix used in Gaussian process modeling.
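The SLB criterion of Eq. (9) and its two limiting cases can be evaluated with a few lines of code. The sketch below assumes a surrogate with the (mean, variance) prediction interface of the earlier FusedSurrogate sketch and, for simplicity, selects the next sample from a finite candidate set rather than by a continuous optimization of the criterion.

```python
import numpy as np

def slb(mu_y, var_y, k):
    """Statistical lower bound, Eq. (9): mu - k*sigma, to be minimized over x_{N+1}."""
    return mu_y - k * np.sqrt(np.maximum(var_y, 0.0))

def next_sample_on_grid(surrogate, candidates, k):
    """Pick the candidate minimizing the SLB criterion on a finite candidate set.

    k = 0 recovers 'Min Mean' (Eq. (11)); k = inf corresponds to 'Max Var' (Eq. (10)).
    `surrogate.predict` is assumed to return (mean, variance) arrays.
    """
    mu, var = surrogate.predict(candidates)
    if np.isinf(k):                      # 'Max Var' limit: ignore the mean entirely
        return candidates[np.argmax(var)]
    return candidates[np.argmin(slb(mu, var, k))]
```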
To overcome these difficulties, we propose a periodical switching criteria (PSC) strategy, as depicted in Figure 2. The sequence of the periodical alternation of criteria begins with an extreme global search, 'Max Var', and ends with an extreme local search, 'Min Mean', while using three compromising searches in between, i.e., 'Min SLB' with k = 5.0, 3.0, and 1.0. By applying the compromise strategy with 'Min SLB' in the middle, sampling points are far less likely to cluster around the local minimum. This strategy is applied repeatedly throughout the whole sequential sampling procedure. Our proposed strategy is good at exploring both the local and global regions in a systematic manner, as demonstrated through examples in later sections.

[Figure 2. The proposed periodical switching criteria (PSC) strategy: Start → Max Var → Min SLB (k=5.0) → Min SLB (k=3.0) → Min SLB (k=1.0) → Min Mean, then repeat.]
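The schedule of Figure 2 amounts to cycling through a short list of k values for the SLB criterion. A minimal sketch, reusing next_sample_on_grid from above, is given below; encoding the 'Max Var' and 'Min Mean' extremes as k = inf and k = 0 is an implementation convenience, not the paper's notation.

```python
import numpy as np

# The five-step cycle from Figure 2, expressed as the k value used in the SLB
# criterion at each stage: inf -> 'Max Var', 0.0 -> 'Min Mean'.
PSC_CYCLE = [np.inf, 5.0, 3.0, 1.0, 0.0]

def psc_next_sample(surrogate, candidates, stage):
    """Periodical switching criteria: step through the schedule stage by stage."""
    k = PSC_CYCLE[stage % len(PSC_CYCLE)]
    return next_sample_on_grid(surrogate, candidates, k)
```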
2.4 Design Confidence and Stopping Criteria
Design confidence (DC), proposed in Chen et al. [8] for model validation, is defined as a probabilistic measure [29] of a chosen optimal design being better than other design choices under model uncertainty. In the proposed variable fidelity optimization framework, the design confidence DC(x*) is defined as the probability that an optimal design x* is the true optimum, in comparison with all designs outside the indifferentiable region X0, considering the interpolation uncertainty of using the surrogate model ys(x), i.e.,

    $DC(x^*) = \min_{x \notin X_0} \{ P[y_s(x^*) < y_s(x)] \} = \min_{x \notin X_0} \{ P[Z_{x^*}(x) > 0] \}$,   (12)

where x* is identified by optimizing the predictor, or mean, of the surrogate model ys(x), i.e.,

    $x^* = \arg\min_{x} \{ \hat{y}_s(x) \}$.   (13)

$Z_{x^*}(x) = y_s(x) - y_s(x^*)$ is assumed to follow a Gaussian distribution with mean

    $\mu_{Z_{x^*}}(x) = \hat{y}_s(x) - \hat{y}_s(x^*)$,   (14)

and variance

    $\sigma^2_{Z_{x^*}}(x) = \mathrm{Var}[y_s(x)] + \mathrm{Var}[y_s(x^*)] - 2\,\mathrm{Cov}[y_s(x), y_s(x^*)]$.   (15)

Figure 3 shows an illustrative example of the uncertainty of ys(x) at three design points, namely x*, xa, and xb, where x* is identified as the optimum based on the mean of prediction. Correspondingly, the mean and variance of Z_x*(x) at the same three points are illustrated in Figure 4, where the probability distribution of Z_x*(xi) (xi = x*, xa, and xb) is calculated from Eqs. (14) and (15). Note that Z_x*(x) has zero mean and zero variance at x*.
mean and zero variance at x*.



[Figure 3. The uncertainty of ys(x) at x*, xa, and xb]

[Figure 4. The uncertainty of zx*(x) at x*, xa, and xb]

The assessment of DC in Eq. (12) involves evaluating the probability that x* is better than the most competing design x outside the indifferentiable region X0. The concept of the indifferentiable region X0 is introduced because, with model uncertainty, designs with performance values close to that of x* should be considered equally good. More strictly, X0 is defined as the region within which the design points are claimed indifferent to x* for a given design tolerance H with a confidence level $C_{X_0} = 1 - \alpha$ (e.g., α = 0.05 or 0.10), as follows:

    $X_0 \equiv \{\, x \mid P[\,|Z_{x^*}(x)| < H\,] > C_{X_0} \,\}$.   (16)

Note that X0 may include not only the close neighborhood of x*, but also regions not adjacent to x*. The evaluation of Eq. (12) requires the search for the most competing design w.r.t. the optimal design x*, called the most competing (MC) point xmc, by minimizing the probability P while treating the indifferentiable region X0 as the constraint. The obtained xmc is then used to calculate the design confidence of x* through

    $DC(x^*) = P[Z_{x^*}(x_{mc}) > 0]$.   (17)

For sequential sampling in global optimization, the commonly used stopping criteria are based on the convergence behavior in either the design space of x or the performance space of y [3][27]. Although generally applicable, none of them provides a probabilistic measure of the validity of an optimal design considering model uncertainty. In this work, we view sequential sampling as a process of reducing the interpolation uncertainty of surrogate models as well as improving the confidence in accepting a design solution. We propose a stopping criterion based on the design confidence values of two consecutive stages: if the design confidences DC(x*) of any two consecutive stages meet (are higher than) a desired confidence level (e.g., 90%, 95%) prescribed by the designers, the sequential sampling process can be terminated. Oftentimes, once resources (time, cost, etc.) are exhausted, designers have to accept the current best design. With the information of design confidence, designers are aware of the validity, or goodness, of the achieved design and can decide whether the design is acceptable or not.
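Putting Eqs. (14)-(17) and the stopping rule together, the sketch below evaluates DC(x*) on a finite set of candidate designs: it tests membership in X0 via Eq. (16), locates the most competing point xmc among the candidates outside X0, and returns the probability of Eq. (17). The grid-based search and the array-based interface are simplifications assumed for illustration; in this work xmc is found by a constrained minimization over the design space.

```python
import numpy as np
from scipy.stats import norm

def design_confidence(mu, var, cov_with_star, i_star, H, c_x0):
    """DC(x*) via Eqs. (16)-(17), evaluated on a finite candidate set.

    mu, var       : surrogate mean/variance at the candidate designs
    cov_with_star : Cov[y_s(x_j), y_s(x*)] for each candidate j
    i_star        : index of the current optimum x* among the candidates
    H, c_x0       : design tolerance and confidence level defining X0
    """
    mu_z = mu - mu[i_star]                                   # Eq. (14)
    var_z = var + var[i_star] - 2.0 * cov_with_star          # Eq. (15)
    sigma_z = np.sqrt(np.maximum(var_z, 1e-16))
    # Membership in the indifferentiable region X0, Eq. (16)
    p_indiff = norm.cdf(H, mu_z, sigma_z) - norm.cdf(-H, mu_z, sigma_z)
    outside = p_indiff <= c_x0
    outside[i_star] = False
    if not np.any(outside):
        return 1.0, i_star
    p_better = 1.0 - norm.cdf(0.0, mu_z, sigma_z)            # P[Z_{x*}(x) > 0]
    i_mc = np.flatnonzero(outside)[np.argmin(p_better[outside])]
    return p_better[i_mc], i_mc                              # Eq. (17): DC and index of xmc

def should_stop(dc_history, dc_target):
    """Stop when two consecutive stages both reach the prescribed DC level."""
    return len(dc_history) >= 2 and min(dc_history[-2:]) >= dc_target
```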
3. NUMERICAL EXAMPLES

3.1 Example 1: Single Dimensional Problem
A single dimensional problem is studied first, with the HF and LF models artificially created for illustrative purposes (see the Appendix for the mathematical equations). At the initial stage (Stage 0), 3 uniform HF sampling points in the region [0, 1] are generated. Figure 5 shows the plots of the true HF model, the LF model, the scaled LF model, and the 3 HF sampling points (dark solid circles). Note that there are two local minima of the HF model, and the LF model only roughly captures the general trend of the HF model while providing a poor approximation. It is noted that the optimal design (x*LF = 0.9150, marked with *) obtained from the LF model is a sub-optimal solution, located in an area that is quite far from the true optimum (x*HF = 0.2307, marked with a star) obtained from the HF model. After scaling, the scaled LF model is pulled close to the 3 HF data points, with the scaling parameters estimated as [ρ̂0, ρ̂1] = [0.1844, 0.5371] based on Eq. (2). It should be noted that the values of [ρ̂0, ρ̂1] change over the stages as more samples are added.

[Figure 5. The HF model and LF model, with 3 initial HF samples (Example 1)]

Different from using a single fidelity model, in variable fidelity optimization very few data from HF simulations are available, so the LF model may be used to capture the global trend of the HF model. We note in Figure 5 that the scaled LF model is fairly close to the HF model. The trend information provided by the scaled LF model is integrated into the Bayesian surrogate model to enhance the accuracy of prediction.

From Stages 1 to 5, an additional 5 sampling points are sequentially collected following the PSC strategy proposed in Section 2.3. Using the Bayesian Gaussian process modeling approach described in Section 2.2, the posterior mean and the interpolation uncertainty (95% prediction interval (PI)) of ys(x) are shown in Figure 6, with all 5 sequential points annotated with their stage numbers.



It is noted that although two local minima exist in the HF model, only one of the 5 samples is used to explore the secondary local minimum region; the other four are all located around the global minimum. Therefore, the created surrogate model is much more accurate in the neighborhood of the global minimum than in that of the secondary local minimum. In Stages 4 and 5, the sequential sampling points are very close to the global minimum. The sequential sampling process is objective-oriented, addressing the needs of both global search and local search.

In Figure 6, the points in the local region (0.215~0.253) surrounding x* (=0.2330) are identified as indifferentiable to x* with design tolerance H (=0.023) and confidence C_X0 (=99%). The design point x = 0.746, shown with large interpolation uncertainty, is identified as the most competing point xmc (marked with a triangle). The xmc point is considered the most competing point to x* among all design points outside the indifferentiable region X0 because the lower bound of its prediction interval is very close to the performance at x*.

[Figure 6. The plots of ŷs(x) and the HF samples (Stage 5): posterior mean, 95% prediction interval, initial and sequential HF samples annotated by stage, x* from ŷs, and the most competing point xmc]

3.2 Example 2: Two Dimensional Problem (the Modified Branin Function)
In this example, the effectiveness of the proposed sequential approach is demonstrated through a modified Branin function. Optimizing the Branin function [31] is challenging because it has three global minima with exactly equal performance values. The problem has been studied in the literature for various purposes. In this work we modify the original Branin function by adding an additional small 'tip' term so that it has only one global minimum, while the other two become local minima that remain competitive with the global one. The modified Branin function is used as the HF model, while we hypothetically construct the LF model (see the Appendix for mathematical details). The 3-D plots of the HF and LF models are shown in Figure 7, and the contour plot of the HF model with the global optimal point indicated as x*HF is shown in Figure 8, with the two local minima marked with * and the global minimum marked with a star. Note that the ranges of x1 and x2 are normalized to 0~1.

At the initial stage (Stage 0) (see Figure 9), only 5 sampling points are available, generated with the Optimal Latin-Hypercube (OLH) algorithm [32]. The contour plot of the obtained surrogate model at this stage is shown in Figure 9. It is clear that the optimal design (x* = [0.6094, 0.3012], marked with a square) from the surrogate model is very different from the true optimal design (x*HF = [0.0970, 0.9344], marked with a star).

[Figure 7. The 3-D plots of the HF and LF models]

[Figure 8. The contour plot of the HF model marked with x*HF = [0.0912, 0.9325], yh(x*HF) = -19.142]

[Figure 9. The plot of ŷs(x) at Stage 0 (5 points) with x* = [0.694, 0.001]]

In order to demonstrate the effectiveness of the proposed PSC strategy in the sequential sampling, we compare the result from using the EI criterion with that from the PSC strategy using the same amount of sampling data. The resulting surrogate models from EI and PSC at Stage 10 (i.e., after 10 sequential points are added) are shown in Figure 10 and Figure 11, respectively, for comparison.



In Figure 10, where EI is applied, it is found that the optimal design x* (marked by a square) is erroneously identified in a sub-optimal region. The number marked beside each sequential point indicates the stage it belongs to. It is found that the sequential points generated via the EI criterion (marked by solid circles) fail to explore the region sufficiently before converging to the local minimum. Similar observations about the shortcomings of the EI criterion are also made in [19] and [27].

[Figure 10. The plot of ŷs(x) at Stage 10, with x* = [0.5276, 0.2132]: sequential sampling points generated by the EI criterion]

[Figure 11. The plot of ŷs(x) at Stage 10, with x* = [0.1011, 0.9156]: sequential sampling points generated by the PSC strategy]

In Figure 11, where the PSC is applied, x* (marked by a square) is identified at [0.1011, 0.9156], fairly close to x*HF. From the locations of the sequential points (marked by solid circles), it is found that most of the sampling points are placed in the local region around the global minimum, while the rest are placed elsewhere to reduce the interpolation uncertainty of the surrogate model. It is found that at the early stages, when the uncertainty of the surrogate model is large, the sampling procedure explores the model space rather than focusing on any locally promising region. After sufficient samples have been accumulated and the uncertainty of the surrogate model is reduced, more samples are used for local refinement of the global surrogate model in the region favored by the design objective.

The most competing point (xmc) w.r.t. the optimal x* obtained from the surrogate model is marked as a triangle in both plots in Figures 10 and 11, where the indifferentiable region X0 is depicted by a collection of '+' markers. xmc and X0 are determined based on the definitions given in the previous section. In essence, with the consideration of model uncertainty, the designs in the indifferentiable region are considered equivalent to the optimal x* within a certain tolerance. The design tolerance value H selected for this example is 12, which is about 3% of the range of y. Note that in Figure 10, since the local sub-optimal regions have similar y values, the indifferentiable region X0 w.r.t. x* is located both in the neighborhood of x* and in a disjoint region centered around the other sub-optimal extreme. In Figure 11, since x* is already correctly identified within the true global optimal region, X0 is a continuous region surrounding x*. The most competing point xmc w.r.t. x* is marked with a triangle in both Figures 10 and 11. It is noted that xmc is always located outside of X0 and represents the most competing design w.r.t. x*. Based on Eqs. (16) and (17), the design confidence DC(x*) achieved using the EI criterion and our proposed PSC strategy is 87.12% and 99.99%, respectively.

[Figure 12. The comparison of the history plots of y* (PSC strategy, EI criterion, one-shot sampling, and ymin from the HF model)]

[Figure 13. The comparison of the history plots of DC(x*) (H = 12 (3% of the y range) and C_X0 = 0.95)]

Figures 12 and 13 show the history plots of the response value y* and the design confidence value DC(x*) from the surrogate models at different stages. The results from using the EI criterion and the PSC strategy are compared in both figures.



It is observed that the DC level consistently increases with more sample points when using the PSC strategy, which yields a high DC level (close to 100%) in Stages 9 and 10, much better than that (87.4%) from the EI criterion with the same amount of sampling points. Besides, in terms of y*, judging from the true y*HF indicated by the horizontal dashed line in Figure 12, it is observed that the PSC generates a more accurate value than the EI. These facts imply that the proposed PSC strategy holds a clear advantage over the EI approach.

From the history plot of y* in Figure 12, it is found that y* from both EI and PSC appears to stabilize after a few stages. However, examining only the history of y* is not sufficient for determining whether the sequential process should be terminated, because the DC level could still be low, as in the case of the EI method. This implies that the use of the design confidence (DC) as a termination criterion is more effective, as it offers more information for design decision making than simply examining the convergence behavior of y*.

To demonstrate the advantage of using sequential sampling over one-shot (single stage) sampling, we generate the same amount of data (5+10=15 points) using the space-filling criterion (OLH) as a comparison to the sequential sampling above. Figure 14 shows the evenly spaced 15 points and the contour plot of the surrogate model ŷs(x) built using the HF data. Even though this surrogate model might be more accurate in a global sense than those built based on sequential sampling, the model fails to capture the local details of the three local minimum regions, and the optimal design x* = [1.000, 0.2103] (marked by a square) is erroneously identified in a local minimum region far away from the true global minimum. It is found that the DC(x*) (=72.81%) achieved by the one-shot sampling is lower than that of the sequential sampling by both the EI criterion and the PSC strategy.

[Figure 14. The plot of ŷs(x) with x* = [1.0000, 0.2103] (one-shot sampling of 5+10=15 points)]

3.3 Example 3: Engine Piston Design - An Engineering Design Example
We use the vehicle engine piston design case study previously analyzed in [33] as an illustrative engineering example. The design goal of this problem is to optimize the geometry of the engine piston to obtain the minimal piston slap noise (measured in dB). Technical background on the engine piston design problem can be found in [34]. Piston slap noise is the engine noise resulting from piston secondary motion, which can be simulated using ADAMS/Flex, a finite element based multi-body dynamics code. In this study, four geometric parameters are considered as the design variables (see Table 1): skirt length (SL), skirt profile (SP), skirt ovality (SO), and pin offset (PO), while the performance of interest is the slap noise (SN). The HF and LF models we use are two Kriging models based on, respectively, a larger set (200) of data and a smaller set (20) of data. Using the HF model output to validate the LF model, the R-square value of the LF model is obtained as 61.60%, indicating that the LF model is not a satisfactory approximation to the HF model.

Table 1. Design variables in the Engine Piston design
Variable  Description    Lower Bound  Upper Bound  Unit
SL        Skirt Length   21           25           mm
SP        Skirt Profile  1            3            N/A
SO        Skirt Ovality  1            3            N/A
PO        Pin Offset     0.5          1.3          mm

At Stage 0, we create 10 initial sampling points using the space-filling criterion (OLH). During the following 5 stages, an additional 5 sampling points are collected using the proposed PSC strategy.

[Figure 15. History plot of DC(x*) for Setting A (H=0.24, C_X0=0.95) and Setting B (H=0.08, C_X0=0.99)]

Figure 15 shows the history plot of DC(x*) measured with two settings of the design tolerance (H) and the confidence of the indifferentiable region (C_X0), namely, Setting A (H=0.24, C_X0=95%) and Setting B (H=0.08, C_X0=99%). Note that Setting B is much stricter than Setting A. Stricter H and C_X0 usually reflect a higher expectation from designers about the accuracy and validity of the optimal solution. From Figure 15, it is found that Setting B always achieves smaller DC values from Stage 0 to Stage 4. However, the two DC curves begin to 'merge' after Stage 4 and stay above 99% in all later stages. If we assume that the desired confidence levels prescribed by designers are 95% for Setting A and 99% for Setting B, then, applying the DC based stopping criterion described in Section 2.4, the sequential process could be terminated at Stage 5 for Setting A and at Stage 6 for Setting B, after the DC values of two consecutive stages are above the specified level. This example shows that designers' preference does have an impact on the decision of whether a solution can be accepted with sufficient confidence.



3.4 Comparison of Surrogate Models from Different Model Fusion Approaches
In our proposed model fusion approach, as described in Section 2.2, the constrained linear scaling (CLS) method is applied first to correct the LF model, before using a bias function that accounts for the remaining discrepancy based on Bayesian modeling. To show the effectiveness of the approach, comparative results of different approaches applied to all three examples discussed above are provided in Table 2. The Root Mean Square Error (RMSE) is used as the accuracy metric: the smaller the RMSE, the higher the accuracy. Note that all the sampling points are created with a one-shot space-filling criterion (OLH), rather than with sequential sampling. For the same problem with the same settings of sampling sizes, the RMSE values from three modeling approaches, namely, the 'non-fusion' approach, the 'non-scaled' approach, and the proposed CLS approach, are compared. By 'non-fusion', we mean that the information from the LF model is not considered at all in constructing the surrogate model. The only difference between the 'non-scaled' approach and the 'CLS' approach is that the former does not scale the LF model before applying the bias function.

In Table 2, the best RMSE values are marked in bold. From these data, we note that the 'non-fusion' approach, which does not exploit the LF model, always ranks the worst among the three, with remarkably large RMSE values. This implies that a surrogate model based on the fusion of LF and HF models is much superior to a surrogate model based on the HF model alone. Compared with the 'non-scaled' approach, the CLS approach we propose also yields higher accuracy in most of the comparisons.

Table 2. The accuracy (RMSE) comparison

Example 1 (1-D)
Sampling size    5        6        7        8        10
non-fusion       0.326    0.297    0.313    0.336    0.338
non-scaled       0.221    0.114    0.060    0.044    0.003
CLS              0.286    0.112    0.052    0.036    0.003

Example 2 (2-D, Modified Branin function)
Sampling size    7        8        9        10       15
non-fusion       85.450   92.829   79.249   91.675   82.18
non-scaled       12.673   8.775    8.787    7.555    6.544
CLS              8.939    7.074    9.186    7.358    2.350

Example 3 (4-D, Engine Piston problem)
Sampling size    10       15       20       30       50
non-fusion       55.475   55.365   55.406   55.409   55.415
non-scaled       0.026    0.022    0.023    0.014    0.011
CLS              0.024    0.036    0.022    0.020    0.011
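For reference, the accuracy metric used in Table 2 is the standard root mean square error between the surrogate predictions and the HF responses at a set of test points; a small helper (assuming NumPy) is given below.

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root Mean Square Error, the accuracy metric used in Table 2."""
    y_pred, y_true = np.asarray(y_pred, dtype=float), np.asarray(y_true, dtype=float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```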
4. CONCLUSION
In this work, we propose a new variable fidelity optimization approach from the perspective of reducing the uncertainty of using surrogate models in engineering design. To effectively incorporate the information from both HF and LF models, a model fusion approach based on linear scaling and Bayesian Gaussian process modeling is proposed. It has been shown through examples that the Bayesian surrogate model obtained from the proposed model fusion approach has better approximation accuracy than using only HF simulation data or combining it with the LF model without scaling. With Gaussian process modeling, the interpolation uncertainty due to the lack of sufficient HF simulations can also be quantified. As part of the variable fidelity optimization framework, an objective-oriented sequential sampling strategy is investigated. Compared with the Expected Improvement approach, the adopted periodical switching criteria (PSC) based strategy for sequential sampling achieves a better balance between optimizing the design objective and reducing the interpolation uncertainty. We also propose to use the design confidence (DC), a probabilistic measure of the confidence in employing the surrogate model for making a specific design choice, as the stopping criterion for the proposed framework. The proposed approach effectively facilitates decision making in engineering design by taking into account the uncertainty associated with the use of surrogate models. Through examples, we show that designers' preference has an impact on the decision of whether a solution can be accepted with sufficient confidence.

In this paper, we only work on examples of unconstrained optimization. The methodology could be extended to more general constrained optimization problems, which would be one future research topic. Another potential application of the proposed framework is to extend the approach to design optimization using information from both computer simulations and physical experiments, where the former and the latter would play roles similar to those of the LF model and the HF data, respectively.

ACKNOWLEDGMENTS
The grant support (DMI-0522662) from the National Science Foundation (NSF) to this research is greatly appreciated.

APPENDIX
Example 1: $y_1^h(x) = 0.5\sin[4\pi \sin(x+0.5)] + (x+0.5)^2/3$, $y_1^l(x) = y_1^h(1.1x - 0.1) - 0.2$, $x \in [0,1]$.

Example 2: $y_2^h(x_1,x_2) = y_{branin}(x_1,x_2) - 22.5\,x_2$, where $y_{branin}(x_1,x_2) = 10 + [x_2 - 5.1 x_1^2/(4\pi^2) + 5 x_1/\pi - 6]^2 + 10\cos(x_1)[1 - 1/(8\pi)]$; $y_2^l(x_1,x_2) = y_2^h(0.7 x_1, 0.7 x_2) + 20\,(0.8 + x_1)^3 - 50$, $x_1 \in [-5,10]$, $x_2 \in [0,15]$.
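For readers who wish to reproduce the numerical examples, the Appendix expressions can be transcribed into Python as below. The Branin term is written in its standard form, consistent with the Appendix expression as reconstructed here; treat this transcription as an assumption rather than a verbatim copy of the authors' code.

```python
import numpy as np

def y1_hf(x):
    """Example 1 HF model (Appendix), x in [0, 1]."""
    return 0.5 * np.sin(4.0 * np.pi * np.sin(x + 0.5)) + (x + 0.5) ** 2 / 3.0

def y1_lf(x):
    """Example 1 LF model: a shifted, biased copy of the HF model (Appendix)."""
    return y1_hf(1.1 * x - 0.1) - 0.2

def branin(x1, x2):
    """Branin term assumed in the Appendix (standard form), x1 in [-5, 10], x2 in [0, 15]."""
    return (10.0 + (x2 - 5.1 * x1 ** 2 / (4.0 * np.pi ** 2) + 5.0 * x1 / np.pi - 6.0) ** 2
            + 10.0 * np.cos(x1) * (1.0 - 1.0 / (8.0 * np.pi)))

def y2_hf(x1, x2):
    """Example 2 HF model: modified Branin with the additional term -22.5*x2."""
    return branin(x1, x2) - 22.5 * x2

def y2_lf(x1, x2):
    """Example 2 LF model constructed from the HF model (Appendix)."""
    return y2_hf(0.7 * x1, 0.7 * x2) + 20.0 * (0.8 + x1) ** 3 - 50.0
```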
REFERENCES
[1] Alexandrov, N. M., Lewis, R. M., Gumbert, C. R., Green, L. L., and Newman, P. A., 1999, "Optimization with Variable-Fidelity Models Applied to Wing Design," The 38th AIAA Aerospace Sciences Meeting and Exhibit, January 10-13, 2000, Reno, NV.
[2] Rodriguez, J. F., Pérez, V. M., Padmanabhan, D., and Renaud, J. E., 2001, "Sequential Approximate Optimization Using Variable Fidelity Response Surface Approximations," Structural and Multidisciplinary Optimization, 22, 24-34.
[3] Gano, S. E., Renaud, J. E., and Sanders, B., 2004, "Variable Fidelity Optimization Using a Kriging Based Scaling Function," 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, August 30 - September 1, 2004, Albany, New York.



[4] Panchal, J. H., Choi, H.-J., Shepherd, J., Allen, J. K., McDowell, D. L., and Mistree, F., 2005, "A Strategy for Simulation-Based Design of Multiscale, Multi-Functional Products and Associated Design Processes," ASME 2005 IDETC/CIE Conference, Long Beach, CA.
[5] Liu, W. K., Karpov, E. G., and Park, H. S., 2005, Nano-Mechanics and Materials: Theory, Multiscale Methods and Applications, Wiley, New York.
[6] Gano, S. E., Renaud, J. E., Martin, J. D., and Simpson, T. W., 2006, "Update Strategies for Kriging Models Used in Variable Fidelity Optimization," Structural and Multidisciplinary Optimization, 32(4), 287-298.
[7] Qian, Z., Seepersad, C. C., Joseph, V. R., Allen, J. K., and Wu, C. F. J., 2006, "Building Surrogate Models Based on Detailed and Approximate Simulations," Journal of Mechanical Design, 128, 668.
[8] Chen, W., Xiong, Y., Tsui, K.-L., and Wang, S., 2006, "Some Metrics and a Bayesian Procedure for Validating Predictive Models in Engineering Design," ASME Design Technical Conference, Design Automation Conference, September 10-13, 2006, Philadelphia, PA.
[9] Wang, S., Chen, W., and Tsui, K., 2006, "Bayesian Validation of Computer Models," submitted to Technometrics.
[10] Apley, D., Liu, J., and Chen, W., 2006, "Understanding the Effects of Model Uncertainty in Robust Design with Computer Experiments," ASME Journal of Mechanical Design, 128(4), 945-958.
[11] Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P., 1989, "Design and Analysis of Computer Experiments," Statistical Science, 4(4), 409-423.
[12] Martin, J. D., and Simpson, T. W., 2002, "Use of Adaptive Metamodeling for Design Optimization," 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta, GA.
[13] Turner, C. J., Campbell, M. I., and Crawford, R. H., 2003, "Generic Sequential Sampling for Meta-Model Approximations," ASME 2003 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, September 2-6, 2003, Chicago, Illinois.
[14] Jin, R., Chen, W., and Sudjianto, A., 2002, "On Sequential Sampling for Global Metamodeling in Engineering Design," The 2002 ASME IDETC Conferences, September 29 - October 2, Montreal, Quebec, Canada.
[15] Lin, Y., Mistree, F., and Allen, J. K., 2004, "A Sequential Exploratory Experimental Design Method: Development of Appropriate Empirical Models in Design," ASME 2004 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, September 28 - October 2, Salt Lake City, Utah.
[16] Farhang-Mehr, A., and Azarm, S., 2005, "Bayesian Meta-Modeling of Engineering Design Simulations: A Sequential Approach with Adaptation to Irregularities in the Response Behavior," International Journal for Numerical Methods in Engineering, 62(15), 2104-2126.
[17] Jones, D. R., Schonlau, M., and Welch, W. J., 1998, "Efficient Global Optimization of Expensive Black-Box Functions," Journal of Global Optimization, 13(4), 455-492.
[18] Mockus, J., Tiesis, V., and Zilinskas, A., 1978, "The Application of Bayesian Methods for Seeking the Extremum," Towards Global Optimisation, 2, 117-129.
[19] Sasena, M. J., 2002, "Flexibility and Efficiency Enhancements for Constrained Global Design Optimization with Kriging Approximations," University of Michigan.
[20] Bandler, J. W., Cheng, Q. S., Dakroury, S. A., Mohamed, A. S., Bakr, M. H., Madsen, K., and Sondergaard, J., 2004, "Space Mapping: The State of the Art," IEEE Transactions on Microwave Theory and Techniques, 52(1), 337-361.
[21] Hastie, T., Tibshirani, R., and Friedman, J., 2001, The Elements of Statistical Learning: Data Mining, Inference and Prediction, Springer, New York.
[22] Smith, R. L., and Zhu, Z., 2004, "Asymptotic Theory for Kriging with Estimated Parameters and Its Application to Network Design," preliminary version, available from http://www.stat.unc.edu/postscript/rs/supp5.pdf.
[23] Li, R., and Sudjianto, A., 2003, "Analysis of Computer Experiments Using Penalized Likelihood Gaussian Kriging Model," ASME International DETC & CIE, September, Chicago, IL.
[24] Santner, T. J., Williams, B. J., and Notz, W. I., 2003, The Design and Analysis of Computer Experiments, Springer-Verlag, New York.
[25] Qian, Z., and Wu, C. F. J., 2005, "Bayesian Hierarchical Modeling for Integrating Low-Accuracy and High-Accuracy Experiments," Georgia Institute of Technology.
[26] Jones, D. R., 2001, "A Taxonomy of Global Optimization Methods Based on Response Surfaces," Journal of Global Optimization, 21(4), 345-383.
[27] Sekishiro, M., Venter, G., and Balabanov, V., 2006, "Combined Kriging and Gradient-Based Optimization Method," 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, September 6-8, Portsmouth, Virginia.
[28] Cox, D. D., and John, S., 1997, "SDO: A Statistical Method for Global Optimization," Multidisciplinary Design Optimization: State of the Art, SIAM, Philadelphia, 315-329.
[29] Hazelrigg, G. A., 2003, "Thoughts on Model Validation for Engineering Design," ASME Design Technical Conference, September 2-6, 2003, Chicago, IL.
[30] Nocedal, J., and Wright, S. J., 1999, Numerical Optimization, Springer Verlag.
[31] Leary, S. J., Bhaskar, A., and Keane, A. J., 2004, "A Derivative Based Surrogate Model for Approximating and Optimizing the Output of an Expensive Computer Simulation," Journal of Global Optimization, 30, 39-58.
[32] Jin, R., Chen, W., and Sudjianto, A., 2005, "An Efficient Algorithm for Constructing Optimal Design of Computer Experiments," Journal of Statistical Planning and Inference, 134(1), 268-287.
[33] Jin, R., 2005, "Enhancements of Metamodeling Techniques on Engineering Design," University of Illinois at Chicago.
[34] Hoffman, R. M., Sudjianto, A., Du, X., and Stout, J., 2003, "Robust Piston Design and Optimization Using Piston Secondary Motion Analysis," SAE World Congress, March 3-6, 2003, Detroit, Michigan.
