
Reliability Engineering and System Safety 215 (2021) 107848


On dimensionality reduction via partial least squares for Kriging-based reliability analysis with active learning

Lavi Rizki Zuhal a, Ghifari Adam Faza a, Pramudita Satria Palar a,∗, Rhea Patricia Liem b

a Faculty of Mechanical and Aerospace Engineering, Institut Teknologi Bandung, West Java, Indonesia
b Department of Mechanical and Aerospace Engineering, The Hong Kong University of Science and Technology, Hong Kong Special Administrative Region

ARTICLE INFO

Keywords:
Reliability analysis
Kriging
Active learning
Partial least squares
Dimensionality reduction

ABSTRACT

Kriging with active learning has been widely employed to calculate the failure probability of a problem with random inputs. Training a Kriging model for a high-dimensional problem is computationally expensive. This reduces the efficiency of active learning strategies since the training cost becomes comparable to that of the function evaluation itself. Kriging with partial least squares (KPLS) has the advantage of a fast training time, but its efficacy on high-dimensional reliability analysis has not yet been properly investigated. In this paper, we assess the potential benefits of KPLS for solving high-dimensional reliability analysis problems. This research aims to identify the potential advantages of KPLS and characterize the problem domain where KPLS can be most efficient and accurate in estimating the failure probability. Tests on a set of benchmark problems with various dimensionalities reveal that KPLS with four principal components significantly reduces the CPU time compared to ordinary Kriging while still achieving accurate failure probability estimates. In some problems, it is also observed that KPLS with four principal components reduces the number of function evaluations, which is beneficial for problems with expensive function evaluations. Using too few principal components, however, does not show any evident improvement over ordinary Kriging.

1. Introduction

Reliability analysis aims to measure the safety of an engineering system under the presence of uncertainties. Typical quantities of interest (QOI) in reliability analysis are the probability of failure and quantiles, which are typically estimated through computer simulations. These quantities are also relevant in reliability-based optimization, which aims to discover optimal solutions that satisfy the constraints under various possible random conditions. Monte Carlo simulation (MCS) [1] is a popular method to calculate the QOIs due to its simplicity. However, the large number of required simulation calls can make this process computationally expensive. Several methods have been introduced to address this issue. Techniques such as importance sampling [2] and line sampling [3] emphasize the generation of samples close to the limit state, while subset simulation calculates the failure probability as a product of intermediate failure events [4]. However, such variance reduction methods still encounter difficulty in cases with computationally expensive computer simulations, e.g., those involving expensive computational fluid dynamics (CFD) or finite element method (FEM) solvers. To that end, a common approach is to employ a surrogate model that approximates the relationship between the input and the output of a computer simulation. The first-order reliability method (FORM) [5] and the second-order reliability method (SORM) [6] construct a local approximation of the limit state function. However, their applications in real-world problems are limited because they are not capable of handling non-linear limit state functions.

Although we can use any surrogate model to solve reliability analysis problems, some methods are more popular than others. Most early references focus mainly on quadratic polynomial regression, which cannot accurately fit a nonlinear response. To fit non-linear limit state functions, researchers have been using neural networks [7,8], support vector machines [9,10], and Kriging [11]. In this particular application, the surrogate model needs to be primarily accurate in the proximity of the limit state function instead of in the entire domain. This consideration has led to the development of adaptive sampling techniques that refine the surrogate model at locations close to the limit state. To that end, it is beneficial to possess information on the uncertainty of the surrogate model to enhance the process of adaptive sampling.

Kriging has an intrinsic error estimation function, which makes it attractive for reliability analysis. Several active learning strategies have been proposed by taking advantage of such information, such as the efficient global reliability analysis (EGRA) [12] and the active learning reliability method using Kriging and MCS (AK-MCS) [13].

∗ Corresponding author.
E-mail addresses: lavirz@ae.itb.ac.id (L.R. Zuhal), pramsp@ftmd.itb.ac.id (P.S. Palar).

https://doi.org/10.1016/j.ress.2021.107848
Received 30 October 2020; Received in revised form 30 April 2021; Accepted 2 June 2021
Available online 9 June 2021
0951-8320/© 2021 Elsevier Ltd. All rights reserved.

The learning function (or infill criterion) is an important part of active learning strategies; learning functions that can be found in the literature include the U-function [13], the least improvement function [14], and the reliability-based expected improvement function [15]. These methods have also been extended to deal with multiple failure modes, such as the extension of AK-MCS for system reliability analysis AK-SYS [16], improved AK-SYS [17], autocorrelated Kriging [18], and the active Kriging with truncated candidate region [19]. Active learning strategies can also be employed for reliability analysis in the context of reliability-based optimization [20,21]. A recent review of surrogate-based reliability-based optimization can be found in Moustapha et al. [22]. Besides Kriging, other surrogate models with error estimates such as bootstrapped polynomial chaos expansion [23] or even ensembles of models [24] can also be used for active learning-based reliability analysis. Besides, AK-MCS has also been modified to deal with extremely small failure probabilities [25]. However, in this paper, the main focus is to solve problems with small, but not extremely small, failure probabilities.

One particular challenge in reliability analysis is to accurately estimate the failure probability of problems with a high-dimensional input space. The difficulties stem primarily from two aspects, namely, a high-dimensional limit state function and the high computational time required to train surrogate models in an active learning setting. There exist some non-adaptive methods for high-dimensional reliability analysis such as the maximum entropy method [26] and the subset simulation support vector machines (SS-SVM) [27]. The issue with high training cost is particularly relevant for Kriging, especially when we have a large number of sampling points in a high-dimensional space [28]. In particular, multiple inversions of the covariance matrix contribute to the high cost of hyperparameter optimization [29]. When coupled with an active learning procedure, the Kriging model needs to be constructed several times, and this adds to the overall time of the calculation of the QOIs. The significant training time of Kriging can reduce the benefits offered by the active learning strategy, especially when the cost is comparable to or even more expensive than that of the function evaluation. However, despite its importance, the CPU time of Kriging with active learning is rarely reported in the literature. There exist some methods such as local Kriging [30,31] that were specifically developed to accelerate the training time of Kriging, but these methods are only applicable to large data sets.

One way to reduce Kriging training time is by simplifying the model structure and reducing the ‘‘effective’’ problem dimension. A simple approach is to assume isotropy, where the same length-scale value is assigned to all inputs, at the expense of approximation accuracy [32]. Another approach is to employ the active subspace method [33], which detects a low-dimensional subspace with the highest variance. Jiang et al. implemented this approach to perform reliability analysis [34]. However, the maximum dimensionality considered by Jiang et al. was only 20, and it is worth noting that the active subspace method requires gradient information for high accuracy. Other dimensionality reduction techniques that have been applied to Kriging include sliced inverse regression [35] and kernel principal component analysis (PCA) [36]; these methods aimed to reduce the approximation error of Kriging. Another alternative is to combine partial least squares (PLS) [37] with Kriging; the dimensionality reduction feature of PLS can help reduce the training time of Kriging [29].

In this paper, we focus on reducing the surrogate model training time by combining Kriging and PLS with active learning to assist reliability analyses of high-dimensional problems. Proposed by Bouhlel et al. [29], Kriging with PLS (KPLS) primarily aims to alleviate the computational cost of Kriging in high-dimensional problems. The computational cost reduction has been successfully demonstrated on a set of benchmark problems [29]. PLS reduces the dimensionality of the design space by projecting the input and output variables into a new latent space. In contrast to the active subspace, KPLS exploits the dependencies between inputs and outputs without eliminating any of the original variables. Thus, the dimensionality reduction is primarily performed on the hyperparameter training side instead of by truncating the number of variables. PLS is also more informative than PCA since the latter only depends on the input information. Furthermore, and most importantly, using KPLS can potentially reduce the overall CPU time of active learning, which is the main objective of this paper. KPLS has been used in a general approximation context and in efficient global optimization [38], but its implementation in the context of reliability analysis has not yet been sufficiently explored and assessed. In particular, we need to assess the effectiveness of KPLS in capturing the tail of the distribution to obtain a good estimate of the failure probability, which is specific to reliability analysis studies. The objective of this paper is then to assess the potential benefits and also pitfalls of using KPLS when coupled with an active learning strategy for solving high-dimensional reliability analysis problems. For this study, we use the general framework of AK-MCS [13] for the KPLS implementation. Overall, the use of KPLS in AK-MCS aims to (1) reduce the total CPU time and (2) reduce the number of function evaluations required for convergence. We perform experiments on a set of benchmark problems to gain insight into KPLS performance in this specific context.

The rest of this paper is organized as follows: Section 2 briefly introduces the reliability analysis problem and the research motivation of this paper; Section 3 explains the Kriging model, the AK-MCS algorithm, and the implementation of the KPLS method within AK-MCS; Section 4 discusses the computational results on a set of benchmark problems; finally, Section 5 concludes this paper with pointers to possible future work.

2. Reliability analysis

2.1. Formulation

We consider an 𝑚-dimensional random vector 𝝃 = {𝜉1, 𝜉2, …, 𝜉𝑚}ᵀ with a probability density function (PDF) 𝝆(𝝃), where each component of 𝝃 is assumed to be independent of the others. Consequently, one gets $\rho(\boldsymbol{\xi}) = \prod_{i=1}^{m} \rho_{\xi_i}(\xi_i)$, where $\rho_{\xi_i}(\xi_i)$ is the marginal PDF of variable 𝜉𝑖. The limit state function, 𝑦 = 𝐺(𝝃), is essentially the output corresponding to the random inputs 𝝃. We are particularly interested in computing the probability of failure, defined as

$P_f = \int_{\Omega} I_F(\boldsymbol{\xi})\,\rho(\boldsymbol{\xi})\,\mathrm{d}\boldsymbol{\xi} = \mathbb{E}(I_F),$    (1)

where 𝛺 is the domain of integration and 𝐼𝐹(𝝃) = {0 if 𝐺(𝝃) > 0 and 1 if 𝐺(𝝃) ≤ 0}. 𝐺(𝝃) usually defines the safety measure. As an example, the limit state function can be the Von Mises stress in the context of structural design or the average temperature in a heat transfer problem. Typically, a computer simulation (e.g., CFD or FEM) is used to calculate the limit state function. The values of 𝑃𝑓 range between 0 and 1. In the reliability analysis context, however, we focus mainly on computing a much smaller range of 𝑃𝑓 (e.g., 𝑃𝑓 < 0.1). In this respect, the quantity E(𝐼𝐹) can be interpreted as the integral of the tail of the distribution 𝝆(𝝃). 𝑃𝑓 is typically calculated by the MCS method using 𝑛𝑚𝑐𝑠 random samples generated according to 𝝆(𝝃), which reads

$P_f \approx \hat{P}_f = \frac{\sum_{i=1}^{n_{mcs}} I_F(\boldsymbol{\xi}^{(i)})}{n_{mcs}}.$    (2)

Other QOIs in reliability analysis are the quantiles of the output, which are not considered in this paper. The primary drawback of MCS is its slow convergence, which renders it unsuitable for applications involving computationally intensive simulations. We use surrogate models in place of the original simulations to accelerate the MCS process.
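To make Eq. (2) concrete, the following short Python sketch (our illustration, not code from the paper) estimates 𝑃̂𝑓 from a Monte Carlo population with NumPy; the limit state `G_toy` and the input distribution are purely hypothetical placeholders.

```python
import numpy as np

def estimate_pf(G, xi):
    """Monte Carlo estimate of the failure probability (Eq. (2)).

    G  : vectorized limit state function; failure is defined by G(xi) <= 0
    xi : (n_mcs, m) array of samples drawn from the input PDF rho(xi)
    """
    indicator = G(xi) <= 0.0          # I_F(xi): 1 inside the failure domain, 0 otherwise
    return indicator.mean()           # P_f_hat = sum(I_F) / n_mcs

# Usage sketch with a hypothetical limit state and independent standard normal inputs
rng = np.random.default_rng(0)
xi_mcs = rng.standard_normal((10**6, 5))                      # n_mcs = 1e6 samples, m = 5
G_toy = lambda x: 4.0 - x.sum(axis=1) / np.sqrt(x.shape[1])   # hypothetical G(xi)
print(estimate_pf(G_toy, xi_mcs))                             # approx Phi(-4) ~ 3.2e-5
```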


2.2. Research motivations

The key idea of surrogate model-assisted reliability analysis is to construct an approximation model in lieu of the actual model 𝐺(𝝃), i.e., 𝐺̂(𝝃) ≈ 𝐺(𝝃), to yield an accurate estimation of 𝑃𝑓. Therefore, the accuracy of 𝐺̂(𝝃) is particularly critical in estimating the limit state for an accurate classification of failed and safe sampling points. This is different from the general use of surrogate models, where they need to be globally accurate. Active learning-based reliability analysis techniques carefully add samples so as to refine 𝐺̂(𝝃) at locations close to the limit state boundary of 𝐺(𝝃). Such strategies require multiple constructions of 𝐺̂(𝝃) due to the constant addition of new samples.

Estimating 𝑃𝑓 with a surrogate model raises two main issues: (1) the surrogate should accurately represent the limit state function, and (2) the training time of the surrogate model contributes to the overall time of the reliability analysis process. For the first issue, the effectiveness of the surrogate model depends primarily on the sample selection and the choices of surrogate modeling technique and active learning method. In this paper, we deal mainly with the second issue. The training time of surrogate models has not received much attention in the literature, as compared to their accuracy assessment, since the training cost of the surrogate model is typically assumed to be negligible in comparison with that of the computer simulation. However, some surrogate models, particularly Kriging, are known to suffer from a time-consuming training process, which is exacerbated in high-dimensional problems. When active learning is implemented, training time becomes even more crucial since the surrogate model training needs to be repeated multiple times as we add more samples. Since we are dealing with high-dimensional reliability analysis, both of these issues become salient and it is important to take the two aspects into account.

In some high-dimensional problems, one can take advantage of the underlying structure of the problem (e.g., by using principal components that can lower the effective dimensionality) for the benefit of surrogate model construction. We are specifically interested in using KPLS for handling high-dimensional reliability analysis problems due to its fast construction time. There have been demonstrated successes in using KPLS to accelerate Kriging training time in the context of general approximation problems and efficient global optimization, but not yet in reliability analysis applications. In this paper, we will assess the potential and capability of KPLS in solving high-dimensional reliability analysis problems. KPLS performance will be compared to that of ordinary Kriging (OK). In particular, this paper tries to answer and discuss the following questions:

1. Could KPLS reduce the number of function evaluations required to accurately estimate 𝑃𝑓?
2. Could KPLS obtain a good estimation of 𝑃𝑓 while reducing the total CPU time?
3. How significant is the reduction in CPU time obtained by KPLS in solving a high-dimensional reliability analysis problem compared to OK?
4. What is a suitable number of principal components to ensure fast convergence and good accuracy of 𝑃̂𝑓?

To answer these research questions, we perform experiments with a suite of benchmark test problems.

3. Kriging with partial least squares

In this paper, we model 𝐺(𝝃) with OK and KPLS. Their performances will be compared and discussed. AK-MCS will be used as the active learning method. Each of these methods is described briefly below. Unless otherwise stated, any mention of Kriging refers to OK.

3.1. Kriging

The prediction structure of Kriging consists of the mean function 𝜇(𝝃) and the stochastic component 𝑍(𝝃), which reads

$y(\boldsymbol{\xi}) = \mu(\boldsymbol{\xi}) + Z(\boldsymbol{\xi}).$    (3)

The stochastic process has a zero mean and a covariance structure defined by cov(𝑧(𝝃), 𝑧(𝝃′)) = 𝜎𝑧²𝑘(𝝃, 𝝃′), where 𝜎𝑧² is the Kriging variance and 𝑘(𝝃, 𝝃′) is the correlation function. In OK, we assume a constant mean 𝜇(𝝃) = 𝜇. In this work, we use the Gaussian correlation function defined by

$k(\boldsymbol{\xi}, \boldsymbol{\xi}'; \boldsymbol{\theta}) = \prod_{i=1}^{m} \exp\left(-\frac{|\xi_i - \xi_i'|^2}{2\theta_i^2}\right),$    (4)

where 𝜽 = {𝜃1, 𝜃2, …, 𝜃𝑚} is the vector of length scales. To construct a Kriging model, we need a finite sampling set with 𝑛 samples X = {𝝃(1), 𝝃(2), …, 𝝃(𝑛)}ᵀ and the corresponding responses 𝒚 = {𝑦(1), 𝑦(2), …, 𝑦(𝑛)}ᵀ = {𝐺(𝝃(1)), 𝐺(𝝃(2)), …, 𝐺(𝝃(𝑛))}ᵀ. By defining 𝑹 as the 𝑛 × 𝑛 correlation matrix between all points in X, whose (𝑖, 𝑗) entry is 𝑘(𝝃(𝑖), 𝝃(𝑗); 𝜽), and 𝒓(𝝃) as the vector of correlations between 𝝃 and the sampling points, whose 𝑖th entry is 𝑘(𝝃(𝑖), 𝝃; 𝜽), the prediction of Kriging can be expressed as (the argument of 𝒓 is dropped for simplicity):

$\hat{y}(\boldsymbol{\xi}) = \hat{\mu} + \boldsymbol{r}^T \boldsymbol{R}^{-1} (\boldsymbol{y} - \boldsymbol{1}\hat{\mu}).$    (5)

The point-wise mean-squared error of Kriging can be calculated by

$\hat{s}^2(\boldsymbol{\xi}) = \sigma_z^2\left(1 - \boldsymbol{r}^T \boldsymbol{R}^{-1} \boldsymbol{r}\right) + (\boldsymbol{1}^T \boldsymbol{R}^{-1} \boldsymbol{r} - 1)^T (\boldsymbol{1}^T \boldsymbol{R}^{-1} \boldsymbol{1})^{-1} (\boldsymbol{1}^T \boldsymbol{R}^{-1} \boldsymbol{r} - 1).$    (6)
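As an illustration of Eqs. (4)–(6), the following Python sketch evaluates the OK predictor and its mean-squared error for a fixed length-scale vector 𝜽. It is a minimal NumPy implementation written for this discussion (the paper's own code is not reproduced here), and a small nugget term is added for numerical stability.

```python
import numpy as np

def gaussian_corr(A, B, theta):
    """Gaussian correlation function of Eq. (4) between the rows of A and B."""
    d2 = (A[:, None, :] - B[None, :, :]) ** 2          # squared differences per dimension
    return np.exp(-0.5 * np.sum(d2 / theta**2, axis=2))

def ok_predict(X, y, theta, x_new, nugget=1e-10):
    """Ordinary Kriging prediction (Eq. (5)) and mean-squared error (Eq. (6))."""
    n = X.shape[0]
    one = np.ones(n)
    R = gaussian_corr(X, X, theta) + nugget * np.eye(n)   # n x n correlation matrix
    Rinv = np.linalg.inv(R)
    mu = (one @ Rinv @ y) / (one @ Rinv @ one)            # Eq. (8)
    sigma2 = (y - mu) @ Rinv @ (y - mu) / n               # Eq. (9)
    r = gaussian_corr(x_new, X, theta)                    # one row of correlations per new point
    y_hat = mu + r @ Rinv @ (y - mu)                      # Eq. (5)
    u = r @ Rinv @ one - 1.0
    # Eq. (6): sigma_z^2 multiplies the first term only, as written in the text
    s2 = sigma2 * (1.0 - np.sum((r @ Rinv) * r, axis=1)) + u**2 / (one @ Rinv @ one)
    return y_hat, np.maximum(s2, 0.0)
```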
We train the length scales of the Kriging model via maximum likelihood estimation (MLE):

$\ln \mathcal{L}(\boldsymbol{\theta}) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln(\sigma_z^2) - \frac{1}{2}\ln(|\boldsymbol{R}|) - \frac{(\boldsymbol{y} - \boldsymbol{1}\mu)^T \boldsymbol{R}^{-1} (\boldsymbol{y} - \boldsymbol{1}\mu)}{2\sigma_z^2}.$    (7)

By using MLE, we obtain the Kriging mean and variance as

$\hat{\mu} = (\boldsymbol{1}^T \boldsymbol{R}^{-1} \boldsymbol{1})^{-1} \boldsymbol{1}^T \boldsymbol{R}^{-1} \boldsymbol{y}$    (8)

and

$\hat{\sigma}_z^2 = \frac{1}{n}(\boldsymbol{y} - \boldsymbol{1}\hat{\mu})^T \boldsymbol{R}^{-1} (\boldsymbol{y} - \boldsymbol{1}\hat{\mu}).$    (9)

To perform the MLE optimization, we use the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm with box constraints (L-BFGS-B) [39] with five restarts, where each restart uses a unique initial point.
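The MLE step can be sketched as follows, minimizing the negative of the concentrated log-likelihood of Eq. (7) with SciPy's L-BFGS-B and five random restarts. The search bounds and the log-parameterization of 𝜽 are our own assumptions, and `gaussian_corr` refers to the helper defined in the previous sketch.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(log_theta, X, y, nugget=1e-10):
    """Negative concentrated log-likelihood of Eq. (7) (constant term dropped)."""
    theta = np.exp(log_theta)
    n = X.shape[0]
    one = np.ones(n)
    R = gaussian_corr(X, X, theta) + nugget * np.eye(n)
    sign, logdet = np.linalg.slogdet(R)
    if sign <= 0:
        return 1e12                                     # guard against an ill-conditioned R
    Rinv = np.linalg.inv(R)
    mu = (one @ Rinv @ y) / (one @ Rinv @ one)          # Eq. (8)
    sigma2 = (y - mu) @ Rinv @ (y - mu) / n             # Eq. (9)
    return 0.5 * (n * np.log(sigma2) + logdet)          # minimizing this maximizes ln L

def train_length_scales(X, y, n_restarts=5, seed=0):
    """L-BFGS-B optimization of the length scales with random restarts."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    best = None
    for _ in range(n_restarts):
        x0 = rng.uniform(np.log(1e-2), np.log(1e2), size=m)   # unique initial point
        res = minimize(neg_log_likelihood, x0, args=(X, y),
                       method="L-BFGS-B",
                       bounds=[(np.log(1e-3), np.log(1e3))] * m)
        if best is None or res.fun < best.fun:
            best = res
    return np.exp(best.x)
```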

3.2. AK-MCS algorithm

The AK-MCS algorithm consecutively adds new samples to refine the surrogate model close to the limit state [13]. The two main components of AK-MCS are the Kriging model and the learning function. In this regard, Kriging is used as the surrogate model to approximate the limit state function and estimate the failure probability. On the other hand, the learning function is used as a criterion to add new samples that refine the estimation of the failure probability. The pseudocode of AK-MCS is listed below (a minimal code sketch of this loop is given after the list):

1. Generation of a Monte Carlo population X𝑚𝑐𝑠 consisting of 𝑛𝑚𝑐𝑠 samples according to 𝝆(𝝃) in the input space.
2. Construction of the Kriging surrogate model using the design of experiments X and 𝒚 consisting of 𝑛𝑖𝑛𝑖𝑡 samples.
3. Estimation of the failure probability 𝑃̂𝑓 using the Kriging surrogate model and X𝑚𝑐𝑠.
4. Identification of the point 𝝃𝑜𝑝𝑡 from X𝑚𝑐𝑠 that optimizes the learning function.
5. Evaluation of 𝝃𝑜𝑝𝑡 using a computer simulation to obtain 𝑦𝑜𝑝𝑡 = 𝐺(𝝃𝑜𝑝𝑡).


6. Enrichment of the design of experiments by adding 𝝃𝑜𝑝𝑡 into X and 𝑦𝑜𝑝𝑡 into 𝒚.
7. Repeat steps 2 to 6 until a stopping condition is reached.
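A compact sketch of the loop above, assuming the helper functions `train_length_scales` and `ok_predict` from the earlier sketches, is given below; the stopping rule shown is only a simplified placeholder for the criteria discussed next.

```python
import numpy as np

def ak_mcs(G, X_init, y_init, X_mcs, n_add_max=100, kappa=0.05, patience=20):
    """Skeleton of the AK-MCS loop (steps 1-7); a sketch, not the paper's code."""
    X, y = X_init.copy(), y_init.copy()
    pf_history = []
    for _ in range(n_add_max):
        theta = train_length_scales(X, y)                    # step 2: (re)train the Kriging model
        y_hat, s2 = ok_predict(X, y, theta, X_mcs)           # predictions on the MC population
        pf_history.append(np.mean(y_hat <= 0.0))             # step 3: estimate P_f
        U = np.abs(y_hat) / np.sqrt(np.maximum(s2, 1e-16))   # step 4: U-function (Eq. (10))
        i_opt = int(np.argmin(U))
        y_opt = G(X_mcs[i_opt][None, :])[0]                  # step 5: evaluate the simulation
        X = np.vstack([X, X_mcs[i_opt]])                     # step 6: enrich the design of experiments
        y = np.append(y, y_opt)
        if converged(pf_history, kappa, patience):           # step 7: stopping condition
            break
    return pf_history[-1], X, y

def converged(pf_history, kappa=0.05, patience=20):
    """Placeholder: Eq. (13) satisfied over `patience` consecutive iterations."""
    if len(pf_history) < patience + 1:
        return False
    h = np.asarray(pf_history[-(patience + 1):])
    rel = np.abs(np.diff(h)) / np.maximum(np.abs(h[1:]), 1e-16)
    return bool(np.all(rel <= kappa))
```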
In this paper, we opt for the U-function as the learning function, although it is possible to use other learning functions such as the expected feasibility function [12], the least improvement function [14], or the reliability-based expected improvement [15]. The U-function is chosen due to its simplicity, and it has also been used for other similar applications such as system reliability analysis [16], estimation of the failure probability function [40], and computation of a very small failure probability [41]. The U-function 𝑈(𝝃) can be expressed as

$U(\boldsymbol{\xi}) = \frac{|\hat{G}(\boldsymbol{\xi})|}{\hat{s}(\boldsymbol{\xi})},$    (10)

where the U-value is the reliability index that quantifies the risk of misclassification. Points with minimum U-values tend to be of high risk, that is, either close to the limit state, highly uncertain, or both. Thus, AK-MCS with the U-function as the learning function primarily adds new samples that are predicted to be close to the limit state or have a high 𝑠̂(𝝃) [13]. In this paper, the failure probability is estimated using 10⁶ MCS samples drawn from the input distributions and applied to the surrogate model.
The AK-MCS algorithm originally uses a stopping criterion based on the U-function; that is, the algorithm terminates when it reaches min(𝑈) ≥ 2. As an alternative, Schobi, Sudret, and Marelli proposed a stopping criterion that is based on the estimate of the statistic of interest [42], which reads

$\frac{\hat{P}_f^+ - \hat{P}_f^-}{\hat{P}_f} \leq \gamma_{\hat{P}_f},$    (11)

where 𝛾_{𝑃̂𝑓} = 0.05 is suggested. 𝑃̂𝑓⁺ and 𝑃̂𝑓⁻ are, respectively, the upper and the lower bound of the failure probability, defined as

$\hat{P}_f^{\pm} = \mathbb{P}\big(\hat{y}(\mathbf{X}_{mcs}) \mp k\,\hat{s}(\mathbf{X}_{mcs}) \leq 0\big),$    (12)

where 𝑘 is set to 𝑘 = 1.96.

In our paper, we experimented with 𝑛𝑚𝑐𝑠 = 10⁶ to calculate the stopping criterion defined in Eq. (11). However, by using Eq. (11), we found that AK-MCS only converged on the low-dimensional problem (i.e., the 10-dimensional bridge truss structure problem). In high-dimensional problems (𝑚 ≥ 40), AK-MCS did not converge to the specified 𝛾_{𝑃̂𝑓} although 𝑃̂𝑓 had already technically converged to the true value. We then adopted a more conventional stopping criterion [43] based on the relative difference between 𝑃̂𝑓 in several subsequent iterations, which reads

$\left|\frac{\hat{P}_{f,i} - \hat{P}_{f,i-1}}{\hat{P}_{f,i}}\right| \leq \kappa,$    (13)

where 𝜅 is a threshold constant and 𝑖 is the iteration number. The first criterion is coupled with an additional criterion that requires the value at the evaluated point to be close to zero, which reads

$\left|\frac{g(\boldsymbol{\xi}^{(i)})}{g(\boldsymbol{\xi}^{(1)})}\right| \leq \kappa,$    (14)

where 𝑔(𝝃(1)) is the limit state function evaluated at the first iteration of AK-MCS. To increase the confidence in the estimated failure probability, we set a rather strict stopping criterion. In this paper, the conditions in Eqs. (13) and (14) (with the constant 𝜅 set to 0.05) should be fulfilled for 20 subsequent iterations instead of 2 as in the original paper [43].
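For illustration, the two stopping checks can be evaluated from the surrogate predictions on the Monte Carlo population as in the following hedged sketch; the variable names (`y_hat`, `s`, `pf_history`) are ours, and the 20-iteration requirement described above is not repeated inside the functions.

```python
import numpy as np

def uncertainty_based_stop(y_hat, s, gamma=0.05, k=1.96):
    """Uncertainty-based criterion of Eqs. (11)-(12), evaluated on the MC population."""
    pf_plus = np.mean(y_hat - k * s <= 0.0)     # P_f^+ : upper bound
    pf_minus = np.mean(y_hat + k * s <= 0.0)    # P_f^- : lower bound
    pf = np.mean(y_hat <= 0.0)
    return pf > 0.0 and (pf_plus - pf_minus) / pf <= gamma

def kappa_stop(pf_history, g_new, g_first, kappa=0.05):
    """One-iteration check of Eqs. (13)-(14); the paper additionally requires this
    to hold for 20 subsequent iterations before stopping."""
    if len(pf_history) < 2 or pf_history[-1] == 0.0:
        return False
    rel_pf = abs(pf_history[-1] - pf_history[-2]) / abs(pf_history[-1])   # Eq. (13)
    rel_g = abs(g_new / g_first)                                          # Eq. (14)
    return rel_pf <= kappa and rel_g <= kappa
```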
3.3. Kriging with partial least squares

It is worth noting that any type of surrogate model that provides the uncertainty structure can be used within the AK-MCS algorithm. In this paper, we study an alternative to the conventional Kriging model, i.e., the KPLS model [29]. KPLS is essentially a Kriging variant that retains Kriging properties, so it also provides both the prediction and the mean-squared error necessary for the AK-MCS algorithm.

Central to KPLS is the PLS algorithm, which works by discovering the principal directions in the original space that best describe the output. After scaling the original variables 𝝃 and making them mean-centered, the principal component 𝒕(𝑙) is sought by finding the direction 𝒘 that maximizes the squared covariance between 𝒕(𝑙) = 𝝃(𝑙−1)𝒘(𝑙) and 𝒚(𝑙−1), which reads

$\boldsymbol{w}^{(l)} = \arg\max_{\boldsymbol{w}^{(l)}}\;\boldsymbol{w}^{(l)T}\boldsymbol{\xi}^{(l-1)T}\boldsymbol{y}^{(l-1)}\boldsymbol{y}^{(l-1)T}\boldsymbol{\xi}^{(l-1)}\boldsymbol{w}^{(l)} \quad \text{such that} \quad \boldsymbol{w}^{(l)T}\boldsymbol{w}^{(l)} = 1.$    (15)

To find the first principal component, we set 𝝃(0) = X and 𝒚(0) = 𝒚. The residual matrices 𝝃(𝑙) and 𝒚(𝑙), for 𝑙 = 1, …, 𝑞 (where 𝑞 is the maximum number of principal components retained), from the local regression of 𝝃(𝑙−1) and 𝒚(𝑙−1) can then be defined as

$\boldsymbol{\xi}^{(l)} = \boldsymbol{\xi}^{(l-1)} - \boldsymbol{t}^{(l)}\boldsymbol{p}^{(l)}, \qquad \boldsymbol{y}^{(l)} = \boldsymbol{y}^{(l-1)} - c_l\,\boldsymbol{t}^{(l)},$    (16)

where 𝒑(𝑙) is a 1 × 𝑚 vector that contains the coefficients of the local regression of 𝝃(𝑙−1) onto the principal component 𝒕(𝑙), and 𝑐𝑙 is the coefficient of the local regression of 𝒚(𝑙−1) onto the principal component 𝒕(𝑙). We can then calculate 𝒕(𝑙) as

$\boldsymbol{t}^{(l)} = \boldsymbol{\xi}^{(l-1)}\boldsymbol{w}^{(l)} = \boldsymbol{\xi}\,\boldsymbol{w}_*^{(l)},$    (17)

which are the principal components that represent the new coordinates from the rotation of the original coordinate system. The matrix 𝑾* = [𝒘*(1), …, 𝒘*(𝑞)] can be calculated by

$\boldsymbol{W}_* = \boldsymbol{W}(\boldsymbol{P}^T\boldsymbol{W})^{-1},$    (18)

where 𝑾 = [𝒘(1), …, 𝒘(𝑞)] and 𝑷 = [𝒑(1)ᵀ, …, 𝒑(𝑞)ᵀ].
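Eqs. (15)–(18) correspond to the standard PLS1 recursion, so in practice 𝑾* can be obtained from an off-the-shelf PLS implementation. The sketch below uses scikit-learn's `PLSRegression`, whose `x_rotations_` attribute matches 𝑾* = 𝑾(𝑷ᵀ𝑾)⁻¹ of Eq. (18); this mapping is our reading of that library and is not stated in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Usage sketch on synthetic data: 50 samples in a 40-dimensional input space
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 40))
y = X @ rng.standard_normal(40) + 0.1 * rng.standard_normal(50)

pls = PLSRegression(n_components=4)    # q = 4 principal components
pls.fit(X, y)
W_star = pls.x_rotations_              # m x q matrix W* = W (P^T W)^{-1}, cf. Eq. (18)
T = pls.transform(X)                   # latent scores t^(l), cf. Eq. (17) (on scaled inputs)
```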
First, PLS is performed to calculate the principal components. Second,
It is worth noting that any type of surrogate model that provides KPLS then uses the information from PLS to construct a new corre-
the uncertainty structure can be used within the AK-MCS algorithm. In lation function. The subsequent procedures are the same as those of
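The KPLS correlation of Eq. (20) can be written compactly as a product of 𝑞 Gaussian factors, one per principal component, as in the following sketch (our illustration, taking a precomputed 𝑾* and a 𝑞-dimensional length-scale vector). It can play the role of `gaussian_corr` in the OK sketch of Section 3.1, leaving only 𝑞 hyperparameters to train.

```python
import numpy as np

def kpls_gaussian_corr(A, B, W_star, theta):
    """KPLS Gaussian correlation of Eq. (20) between the rows of A and B.

    W_star : (m, q) projection matrix from PLS (columns w*(l))
    theta  : (q,) length scales, one per retained principal component
    """
    k = np.ones((A.shape[0], B.shape[0]))
    for l in range(W_star.shape[1]):
        # F_l keeps the input dimensionality but re-weights dimension i by w*_i^(l)
        Fa = A * W_star[:, l]
        Fb = B * W_star[:, l]
        d2 = np.sum((Fa[:, None, :] - Fb[None, :, :]) ** 2, axis=2)
        k *= np.exp(-0.5 * d2 / theta[l] ** 2)
    return k
```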
It is worth noting that KPLS is a two-step non-intrusive process. First, PLS is performed to calculate the principal components. Second, KPLS uses the information from PLS to construct a new correlation function. The subsequent procedures are the same as those of a standard Kriging model. It is important to decide the number of principal components used in KPLS.

Fig. 1. Plots of number of updates versus convergence criterion for the linear 40-dimensional problem.

In the work by Bouhlel et al. [29], a cross-validation procedure is used to determine a sufficient number of principal components. The drawback of such a procedure is that several intermediate KPLS models need to be constructed, which adds to the CPU time. Therefore, we use a fixed number of principal components for KPLS during the entire iteration of AK-MCS. The implementation of KPLS within AK-MCS is simply done by replacing the Kriging surrogate model (see the pseudocode in Section 3.2) with the KPLS model. Moreover, users need to define the number of principal components 𝑞 that defines the KPLS model. The rest of the procedure is the same as the original AK-MCS with the ordinary Kriging model. In this paper, all codes are written in Python 3. The matrix inversion, which is the most expensive task in the hyperparameter optimization, is done by the NumPy linear algebra functions that rely on BLAS and LAPACK. All experiments are performed using a personal computer with a Core i7-7700HQ at 2.80 GHz.
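The paper does not name a specific KPLS implementation; one readily available option is the KPLS class of the open-source SMT toolbox associated with Bouhlel et al. [29]. The following usage sketch assumes the SMT API (`set_training_values`, `train`, `predict_values`, `predict_variances`, `n_comp`); consult the package documentation before relying on it, and note that `X`, `y`, and `X_mcs` are hypothetical arrays.

```python
import numpy as np
from smt.surrogate_models import KPLS

# Hypothetical data: X is (n, m), y is (n, 1); n_comp sets the number of PLS components q
sm = KPLS(n_comp=4, print_global=False)
sm.set_training_values(X, y)
sm.train()

y_hat = sm.predict_values(X_mcs)        # surrogate prediction on the MC population
s2 = sm.predict_variances(X_mcs)        # point-wise variance used by the U-function
U = np.abs(y_hat) / np.sqrt(np.maximum(s2, 1e-16))
```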
4. Computational results on benchmark applications

We investigate the potential of KPLS to accelerate the active learning process in reliability analysis from two perspectives, namely, (1) the CPU time and (2) the number of evaluations required for convergence. The first point is our primary interest since it taps into KPLS' main strength. By investigating the second viewpoint, we can assess KPLS' potential to reduce the number of function evaluations, which is particularly relevant when the function evaluation is highly expensive (e.g., in hours or days). To be more exact, we aim to investigate the potential advantage of KPLS in reducing the number of function evaluations needed to reach a sufficiently accurate estimation of 𝑃𝑓 as compared to OK. In addition, we also study how KPLS performance varies with the number of principal components. The goal is to identify the adequate number of principal components that can ensure good estimations of 𝑃𝑓 while simultaneously reducing the CPU time compared to OK. In this paper, the main objective is to reduce the CPU time for solving problems with small failure probability (i.e., on the order of 10⁻², 10⁻³, or 10⁻⁴). Solving problems with extremely small failure probability (i.e., lower than 10⁻⁴) is the subject of future work.

Fig. 1 shows the evolution of the criteria defined in Eqs. (11), (13), and (14) for one run of the linear 40-dimensional problem (see Section 4). It can be seen that, regardless of the type of surrogate model used, the uncertainty-based stopping criterion stalled at 𝑛𝑢𝑝𝑑 = 40, where 𝑛𝑢𝑝𝑑 is the number of updates, despite the accurate prediction of the failure probability (see Fig. 1a and Section 4.2). The uncertainty-based criterion never reaches 𝛾 < 5%, which prevents it from converging. On the other hand, the criteria from Eqs. (13) and (14) successfully reach 𝜅 < 5% after about 40 updates. The trend persists in all problems except for the bridge truss structure problem, which is a low-dimensional problem. This convergence problem of the uncertainty-based stopping criterion is the main reason why we adopted the 𝜅-stopping criterion (Eqs. (13) and (14)) for AK-MCS.

For all problems, we repeat the computational experiments five times and monitor the convergence of 𝑃̂𝑓 with respect to the CPU time and the number of function evaluations. However, for the 100-dimensional problem, we only repeated the experiment three times due to the high computational cost required to train OK. The wall clock time of a single run is denoted as 𝑡. The number of function evaluations and the CPU time to reach a sufficiently accurate estimation of 𝑃𝑓 are also monitored, in which we define 𝑛𝑢𝑝𝑑(𝜀 < 5%) and 𝑡(𝜀 < 5%) as the number of updates and the wall clock time after which subsequent updates consistently yield 𝜀 < 5%, where 𝜀 is defined as

$\varepsilon = \frac{|\hat{P}_f - P_f|}{P_f}.$    (21)

A consistent 𝜀 < 5% throughout the AK-MCS iterations indicates that the estimation is already stable. We also define similar performance metrics for the condition where AK-MCS stops according to the 𝜅-stopping criterion with 𝜅 < 5%, namely 𝑛𝑢𝑝𝑑(𝜅 < 5%), 𝑡(𝜅 < 5%), and 𝑃̂𝑓(𝜅 < 5%), where 𝑃̂𝑓(𝜅 < 5%) is the corresponding estimated failure probability. Finally, similar performance metrics are also defined at the maximum number of updates, namely, 𝑡(𝑛𝑢𝑝𝑑 = 𝑛𝑚𝑎𝑥) and 𝑃̂𝑓(𝑛𝑢𝑝𝑑 = 𝑛𝑚𝑎𝑥), where 𝑛𝑚𝑎𝑥 is the maximum number of updates. We also monitor the number of independent runs that fail to reach 𝜀 < 5% within a specified maximum number of iterations (i.e., 𝑁𝑓𝑎𝑖𝑙). Table 1 summarizes the definitions of the performance metrics and results used in this paper.

Shown in the tables of results are the mean of the performance metrics and the corresponding coefficient of variation (COV), defined as the ratio of the standard deviation to the mean, for 𝑛𝑢𝑝𝑑, 𝑡, and 𝑃̂𝑓 over multiple independent runs. In all tables, the COV for any metric (in percentage) is shown inside brackets. The evolution of 𝜀 with respect to the number of updates is also presented. Notice that, in practice, the costs of OK and KPLS depend on various implementation aspects of the algorithm (e.g., the choice of hyperparameter optimization technique and the matrix inversion method). At the end of this section, we give remarks on the use of KPLS for reliability analysis after we discuss the computational results.

In this paper, we considered 𝑞 = 4 as the maximum number of principal components. In the following discussion, we denote KPLS models with 𝑞 = 1, 2, 3, and 4 principal components as KPLS1, KPLS2, KPLS3, and KPLS4, respectively.

Fig. 2. Bridge truss structure sketch.

Table 1
Definitions of the performance metrics and reliability results.
Metric Definition
𝑁𝑓 𝑎𝑖𝑙 Number of independent runs that fail to reach 𝜀 < 5%
𝑛𝑢𝑝𝑑 (𝜀 < 5%) Number of updates to reach consistent 𝜀 < 5%
𝑡(𝜀 < 5%) Wall clock time to reach consistent 𝜀 < 5%
𝑛𝑢𝑝𝑑 (𝜅 < 5%) Number of updates to reach 𝜅 < 5% (Eqs. (13) and (14))
𝑡(𝜅 < 5%) Wall clock time to reach 𝜅 < 5% (Eqs. (13) and (14))
𝑃̂𝑓 (𝜅 < 5%) Final estimate of failure probability (according to Eqs. (13) and (14))
𝑡(𝑛𝑢𝑝𝑑 = 𝑛𝑚𝑎𝑥 ) Wall clock time to reach maximum number of updates (Eqs. (13) and (14))
𝑃̂𝑓 (𝑛𝑢𝑝𝑑 = 𝑛𝑚𝑎𝑥 ) Final estimate of failure probability at maximum number of updates
𝑡𝑖𝑡𝑒𝑟 Wall clock time for a specified number of iterations

Table 2
Random variables for the bridge truss structure problem.
Variable      Units   Distribution   Mean         Standard deviation
𝐸1, 𝐸2        Pa      Lognormal      2.1 × 10¹¹    2.1 × 10¹⁰
𝐴1            m²      Lognormal      2.0 × 10⁻³    2.0 × 10⁻⁴
𝐴2            m²      Lognormal      2.0 × 10⁻³    2.0 × 10⁻⁴
𝑃1, …, 𝑃6     N       Gumbel         5.0 × 10⁴     7.5 × 10³
4.1. Benchmark problem 1: 10-dimensional bridge truss structure problem

The first benchmark problem is a bridge truss structure problem adopted from Schobi et al. [42], as illustrated in Fig. 2. Among all the benchmarked cases, this problem has the lowest dimensionality. The output of interest for this problem is the midspan deflection 𝑢(𝝃), which is evaluated by running a finite element analysis, subject to uncertainties in the modulus of elasticity and the cross-sectional area. The random variables are collected in a vector 𝝃 = {𝐸1, 𝐸2, 𝐴1, 𝐴2, 𝑃1, 𝑃2, 𝑃3, 𝑃4, 𝑃5, 𝑃6}ᵀ, where 𝐸1 and 𝐴1 are the modulus of elasticity and the cross-sectional area of the horizontal bars, respectively, 𝐸2 and 𝐴2 represent the modulus of elasticity and the cross-sectional area of the diagonal bars, respectively, and 𝑃1, 𝑃2, …, 𝑃6 are the vertical loads on the upper nodes of the structure (see Table 2 for details of the input distributions). The limit state is defined as 𝑢(𝝃) > 10 cm, which corresponds to 𝑃𝑓 = 0.00473, a relatively small failure probability. The FEM evaluation is very fast for this problem and its cost can be assumed negligible. We started with 𝑛𝑖𝑛𝑖𝑡 = 10 samples and enriched the sampling plan with a maximum of 150 additional samples. For this problem, the estimated failure probability reaches an error level below 5% (compared to the value from MCS) if the value lies within the range of 𝑃̂𝑓 = 4.449 × 10⁻³ to 𝑃̂𝑓 = 4.966 × 10⁻³. For this problem, AK-MCS converged according to the uncertainty-based stopping criterion (Eq. (11)). Thus, we also depict the results according to the uncertainty-based stopping criterion, to be contrasted with those of the 𝜅-stopping criterion (Eqs. (13) and (14)).

From the viewpoint of the uncertainty-based stopping criterion (see Table 3), it can be seen that all KPLS variants require more function calls to converge compared to OK. Furthermore, we also observe that the convergence takes more function evaluations with fewer principal components. The slower convergence of KPLS according to the uncertainty-based stopping criterion means that the variance of 𝑃̂𝑓 from KPLS is relatively high, which explains why it requires a higher number of function calls to stop according to the uncertainty-based stopping criterion compared to OK. However, interestingly, Fig. 3a shows that the convergence rate of KPLS is similar to that of OK. The higher variance in KPLS's prediction seems to be difficult to avoid. This result then suggests that it is more desirable to equip KPLS with other stopping criteria. As shown in Table 4, the 𝜅-stopping criterion needed far fewer function calls to achieve sufficiently good accuracy.

The results shown in Table 4 and Fig. 3 show that the KPLS variants with fewer than three principal components perform worse than OK since they need more updates to converge to 𝜀 < 5%. It can also be seen that all methods converge to 𝜀 < 5% within the specified number of iterations. The estimated failure probability at the maximum number of updates and at 𝜅 < 5% is especially good for KPLS4. It can be seen that the COV of the results according to the 𝜅-stopping criterion is higher than that of the uncertainty-based criterion; however, notice that the COV of the former is still lower than 3%.

Fig. 3a also shows that KPLS4 is comparable to OK in terms of convergence to the reference 𝑃𝑓 value. However, the computational benefit of KPLS is limited due to the low dimensionality of the problem. This observation is supported by the results presented in Fig. 3b, where the training + prediction times for OK and KPLS are close to each other. Note that Fig. 3b shows the actual wall-clock time of the active learning process. These results suggest that the standard OK is sufficient when the problem dimensionality is relatively low, and using KPLS might be superfluous.

The results also show that AK-MCS converged faster when the 𝜅-stopping criterion is used compared to the uncertainty-based criterion. The estimated failure probabilities are also very close to the value from MCS and sufficient for engineering accuracy, although not as close as those from the uncertainty-based stopping criterion. Regardless, for this problem, there is no apparent advantage of using KPLS either from the viewpoint of the number of function calls or the computational time.

4.2. Benchmark problem 2: Linear 40- and 100-dimensional performance functions

The second benchmark problem is a scalable linear function for which the level of 𝑃𝑓 does not change significantly with increasing input dimensionality [44]. This problem can be expressed as

$G(\boldsymbol{\xi}) = \big(m + 3\sigma\sqrt{m}\big) - \sum_{i=1}^{m}\xi_i,$    (22)

where 𝜎 = 0.2 and the fail region is defined as 𝐺(𝝃) < 0. The estimated failure probabilities for the 40- and 100-dimensional linear performance functions are 𝑃𝑓 = 2.008 × 10⁻³ and 𝑃𝑓 = 1.61 × 10⁻³, respectively.
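For reference, a vectorized implementation of the limit state in Eq. (22) is given below (our sketch); the marginal input distribution is not restated in this excerpt, so sampling is left to the caller.

```python
import numpy as np

def linear_limit_state(xi, sigma=0.2):
    """Scalable linear performance function of Eq. (22); failure when G(xi) < 0."""
    m = xi.shape[1]
    return (m + 3.0 * sigma * np.sqrt(m)) - xi.sum(axis=1)
```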
We perform experiments with 𝑚 = 40 and 𝑚 = 100 by using 𝑛𝑖𝑛𝑖𝑡 = 20 and 𝑛𝑖𝑛𝑖𝑡 = 50, respectively. The number of additional samples is set to 100 and 150 for 𝑚 = 40 and 𝑚 = 100, respectively.

Table 3
Results for the 10-dimensional bridge truss structure problem according to the uncertainty-based stopping criterion (Eq. (11)). The estimated failure probability from MCS equals 𝑃𝑓 = 4.73 × 10−3.
Method KPLS1 KPLS2 KPLS3 KPLS4 OK
𝑛𝑢𝑝𝑑 (𝛾 < 5%) 147 (4.23%) 134.20 (8.58%) 128.20 (12.65%) 118.80 (7.65%) 97 (12.66%)
𝑃̂𝑓 (𝛾 < 5%) 4.720E−3 (0.37%) 4.725E−3 (0.32%) 4.730E−3 (0.18%) 4.73E−3 (0.22%) 4.722E−3 (0.17%)
𝑡(𝛾 < 5%) 1.314E+3 s (7.76%) 1.52E+3 s (14.86%) 1.475E+3 s (23.03%) 1.449E+3 s (14.11%) 1.079E+3 s (25.34%)

Table 4
Results for the 10-dimensional bridge truss structure problem. The estimated failure probability from MCS equals 𝑃𝑓 = 4.73 × 10−3.
Method KPLS1 KPLS2 KPLS3 KPLS4 OK
𝑁𝑓 𝑎𝑖𝑙 0 0 0 0 0
𝑛𝑢𝑝𝑑 (𝜀 < 5%) 24.8 (21.97%) 23.6 (19.09%) 23.8 (27.61%) 19.19 (25.97%) 19.60 (14.24%)
𝑡(𝜀 < 5%) 1.128E+3 s (31.15%) 1.229E+3 s (30.52%) 1.325E+3 s (39.20%) 9.59E+2 s (46.81%) 9.65E+2 s (22.67%)
𝑛𝑢𝑝𝑑 (𝜅 < 5%) 41.6 (26.80%) 38.00 (9.66%) 35.2 (15.35%) 36.20 (26.38%) 35.39 (10.86%)
𝑃̂𝑓 (𝜅 < 5%) 4.746E−3 (0.98%) 4.716E−3 (0.92%) 4.738E−3 (2.42%) 4.731E−3 (1.50%) 4.729E−3 (0.80%)
𝑡(𝜅 < 5%) 1.775E+2 s (41.53%) 1.86E+2 s (19.20%) 1.654E+2 s (22.35%) 1.878E+2 s (38.90%) 2.23E+2 s (48.89%)
𝑃̂𝑓 (𝑛𝑢𝑝𝑑 = 100) 4.707E−3 (0.73%) 4.720E−3 (0.22%) 4.740E−3 (0.23%) 4.736E−3 (0.16%) 4.720E−3 (0.15%)
𝑡(𝑛𝑢𝑝𝑑 = 100) 8.691E+2 s (3.28%) 1.183E+3 s (2.50%) 1.272E+3 s (0.93%) 1.247E+3 s (1.23%) 1.343E+3 s (11.16%)

Fig. 3. Convergence of 𝑃̂𝑓 and the average training + prediction time (in seconds) for the 10-dimensional bridge problem.

For the 40-dimensional problem, the lower and upper bounds for the estimated failure probability to reach an error level below 5% are 𝑃̂𝑓 = 1.907 × 10⁻³ and 𝑃̂𝑓 = 2.1907 × 10⁻³, respectively. As for the 100-dimensional problem, the estimated failure probability reaches an error level below 5% if it is within the range of 𝑃̂𝑓 = 1.529 × 10⁻³ to 𝑃̂𝑓 = 1.6905 × 10⁻³.

The results are shown in Tables 5 and 6 for the 40- and 100-dimensional problems, respectively. The evolution of 𝜀 with respect to the number of updates for the 40- and 100-dimensional problems is shown in Figs. 4a and 4b, respectively. Using the 𝜅-stopping criterion, it can be seen that KPLS4 accurately estimated the true 𝑃𝑓 with less training time than OK for both the 40- and 100-dimensional problems. The average time saving of KPLS4 according to the 𝜅-stopping criterion with respect to OK for the 40-dimensional problem is 1720 s. The time saving is even more impressive on the 100-dimensional problem, in which KPLS3 and KPLS4 reduced the order of the computational time from 10⁴ to 10³ s (according to 𝜀 < 5%) and from 10⁵ to 10⁴ s (according to 𝜅 < 5% and the maximum number of updates). Such a dramatic reduction in CPU time is very advantageous if a single function evaluation costs roughly a few seconds to, say, five minutes. The estimation of the failure probability from KPLS3 is also accurate, but not as accurate as that of KPLS4. From the viewpoint of the number of function evaluations to reach 𝜀 < 5% or to converge according to the 𝜅-stopping criterion, KPLS3 and KPLS4 require fewer function calls than OK on average, especially for the 100-dimensional problem. Although the COV for KPLS4 seems higher than that of OK, the time reduction and the number of iterations for every single run of the former are still generally lower than those of the latter.

For both 𝑚 = 40 and 𝑚 = 100, we observe that all KPLS variants significantly underpredict the true 𝑃𝑓 at the beginning of AK-MCS. This means that, for this problem, KPLS cannot adequately capture the tail of the distribution with small initial experimental designs. However, as more samples are added via AK-MCS, KPLS3 and KPLS4 reach an error level consistently below 5% with fewer function calls than OK. It is also interesting to see that KPLS3 and KPLS4, especially the latter, require fewer function evaluations than OK to reach a consistent 𝜀 < 5%. This means that, regardless of the computational cost of the function evaluation, KPLS4 always reduces the overall CPU time compared to OK. The results also show that the number of principal components matters, as evidenced by the disparity between the performances of KPLS1 and KPLS4. Also, notice that KPLS1 fails to reach 𝜀 below 5% in the majority of independent runs. Using too few principal components might lead to an oversimplified model, which causes poor estimation of the failure probability. Considering that there is no significant difference between the training times of the four KPLS variants, KPLS4 is considered the best-performing KPLS variant for this problem. We observe that, for the 100-dimensional problem, OK experiences a fluctuation of 𝑃̂𝑓 until the 100th iteration, while KPLS4 already reaches a low level of error from about the 70th iteration.

By observing the combined training and prediction time of Kriging at each iteration, as shown in Fig. 5, we notice a sudden drop in the wall-clock time, which is accompanied by a sudden increase in the accuracy of the failure probability (notice that the cost of function evaluation is negligible).

Table 5
Results for the linear 40-dimensional problem. Notice that KPLS1 did not converge according to the 𝜅-stopping criterion and failed to reach 𝜀 < 5% in 4 out of 5 independent runs. The estimated failure probability from MCS equals 𝑃𝑓 = 2.008 × 10−3.
Method KPLS1 KPLS2 KPLS3 KPLS4 OK
𝑁𝑓 𝑎𝑖𝑙 4 0 0 0 0
𝑛𝑢𝑝𝑑 (𝜀 < 5%) 88 (1 run) 50.4 (18.69%) 42.6 (9.03%) 36.6 (10.87%) 48.6 (8.04%)
𝑡(𝜀 < 5%) 2.96E+3 s (1 run) 1.45E+3 s (33.83%) 1.12E+3 s (12.43%) 9.45E+2 s (18.80%) 2.69E+3 s (15.59%)
𝑛𝑢𝑝𝑑 (𝜅 < 5%) – 79.75 (29.53%) 63.25 (16.94%) 60.00 (20.40%) 63.75 (8.23%)
𝑃̂𝑓 (𝜅 < 5%) – 2.005E−3 (0.13%) 2.001E−3 (0.18%) 2.006E−3 (0.20%) 2.006E−3 (0.16%)
𝑡(𝜅 < 5%) – 2.96E+3 s (44.83%) 2.08E+3 s (30.54%) 1.94E+3 s (30.90%) 3.66E+3 s (16.98%)
𝑃̂𝑓 (𝑛𝑢𝑝𝑑 = 100) 1.878E−3 (3.7%) 2.006E−3 (0.06%) 2.005E−3 (0.1%) 2.006E−3 (0.06%) 2.006E−3 (0.11%)
𝑡(𝑛𝑢𝑝𝑑 = 100) 3.84E+3 s (3.91%) 4.18E+3 s (3.67%) 4.35E+3 s (3.49%) 4.36E+3 s (1.01%) 7.25E+3 s (3.21%)

Table 6
Results for the linear 100-dimensional problem. Notice that KPLS1 did not converge with respect to the 𝜅-stopping criterion. The estimated failure probability from MCS equals 𝑃𝑓 = 1.61 × 10−3.
Method KPLS1 KPLS2 KPLS3 KPLS4 OK
𝑁𝑓 𝑎𝑖𝑙 3 0 0 0 0
𝑛𝑢𝑝𝑑 (𝜀 < 5%) – 109.66 (18.01%) 90.66 (7.74)% 78.6 (7.73%) 108.3 (2.32%)
𝑡(𝜀 < 5%) – 1.28E+4 s (27.64%) 9.25E+3 s (14.08%) 7.53E+3 s (11.38%) 7.48E+4 s (5.10%)
𝑛𝑢𝑝𝑑 (𝜅 < 5%) – 145 (16.99%) 130.25 (19.18%) 124.25 (17.77%) 136.25 (6.89%)
𝑃̂𝑓 (𝜅 < 5%) – 1.57E−3 (4.10%) 1.603E−3 (0.48%) 1.608E−3 (0.26%) 1.608E−3 (0.11%)
𝑡(𝜅 < 5%) – 1.95E+4 s (9.06%) 1.68E+4 s (29.99%) 1.54E+4 s (27.97%) 9.39E+4 s (13.62%)
𝑃̂𝑓 (𝑛𝑢𝑝𝑑 = 150) 1.48E−4 (22.95%) 1.57E−3 (4.07%) 1.606E−3 (0.31%) 1.609E−3 (0.19%) 1.608E−3 (0.11%)
𝑡(𝑛𝑢𝑝𝑑 = 150) 1.67E+4 s (2.00%) 2.06E+4 s (3.12%) 2.01E+4 s (4.31%) 2.04E+4 s (0.62%) 1.05E+5 s (5.00%)

Fig. 4. Mean convergence of 𝑃̂𝑓 and the CPU time (in seconds) to reach 𝜀 < 5% for the 40- and 100-dimensional analytical problem.

Fig. 5. Average combined training and prediction time and function evaluation of the Kriging model (in seconds) for the 40- and 100-dimensional linear problem. The cost of
function evaluation is negligible.


Fig. 6. Evolution of the log-likelihood for one sample run of OK for the linear 40- and 100-dimensional problem.

Table 7
Results for the nonlinear 40-dimensional problem. Notice that the AK-MCS did not converge according to the 𝜅−stopping criterion. The failure
probability estimated by MCS is 𝑃𝑓 = 3.647 × 10−4 .
Method KPLS1 KPLS2 KPLS3 KPLS4 OK
𝑁𝑓 𝑎𝑖𝑙 0 0 0 0 0
𝑛𝑢𝑝𝑑 (𝜀 < 5%) 34.2 (59.40%) 44.75 (37.58%) 40.20 (33.34%) 33 (37.54%) 43.2 (55.94%)
𝑡(𝜀 < 5%) 3.25E+3 s (73.44%) 4.74E+3 s (48.03%) 4.28E+3 s (40.33%) 3.34E+3 s (42.08%) 8.81E+3 s (90.03%)
𝑃̂𝑓 (𝑛𝑢𝑝𝑑 = 100) 3.53E−4 (1.23%) 3.48E−4 (3.49%) 3.606E−4 (2.11%) 3.65E−4 (2.55%) 3.67E−4 (1.81%)
𝑡(𝑛𝑢𝑝𝑑 = 100) 1.28E+4 s (2.13%) 1.55E+4 s (20.38%) 1.46E+4 s (3.10%) 1.51E+4 s (5.19%) 3.18E+4 s (4.38%)

To explain this behavior, we observe the evolution of the MLE iterations in one independent run of OK. Fig. 6 displays the optimized log-likelihood of OK obtained from 5 L-BFGS-B runs for the 40- and 100-dimensional problems. It is clear that the log-likelihood value does not improve until about the 40th and the 97th iteration for the 40- and 100-dimensional problems, respectively. This clearly indicates the difficulty of performing a high-dimensional hyperparameter optimization for the OK model, which is characterized by multiple local optima in the likelihood function [45]. It can also be seen that the training time of OK is notably reduced once the training sample is sufficient, as the search for the optimal likelihood becomes easier at this stage. Conversely, such behavior does not occur in the KPLS variants, which tend to follow a linear trend, owing to the simpler hyperparameter optimization process involving a reduced number of hyperparameters.

4.3. Benchmark problem 3: a nonlinear 40-dimensional performance function

The third problem features non-linear terms [26] and can be expressed as

$G(\boldsymbol{\xi}) = 3 - \xi_m + 0.01\sum_{i=1}^{m-1}\xi_i^2,$    (23)

where 𝜉1, 𝜉2, …, 𝜉𝑚 are standard normal variables and 𝑚 = 40. The fail region is defined as 𝐺(𝝃) < 0. Calculation with MCS estimates that the failure probability for this problem is very small, i.e., 𝑃𝑓 = 3.647 × 10⁻⁴. The size of the experimental design is set to 𝑛𝑖𝑛𝑖𝑡 = 20 and enriched with an additional 100 samples. For this problem, the lower and upper bounds for the estimated failure probability to reach an error level below 5% are 𝑃̂𝑓 = 3.464 × 10⁻⁴ and 𝑃̂𝑓 = 3.829 × 10⁻⁴, respectively.
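A sketch of the limit state in Eq. (23) and a crude MCS sanity check of the order of magnitude of 𝑃𝑓 (our illustration, using the stated standard normal inputs) is given below.

```python
import numpy as np

def nonlinear_limit_state(xi):
    """Nonlinear performance function of Eq. (23); failure when G(xi) < 0."""
    return 3.0 - xi[:, -1] + 0.01 * np.sum(xi[:, :-1] ** 2, axis=1)

# Usage sketch: crude MCS check of the reported order of magnitude of P_f
rng = np.random.default_rng(0)
xi = rng.standard_normal((10**6, 40))              # xi_1..xi_m are standard normal, m = 40
print(np.mean(nonlinear_limit_state(xi) < 0.0))    # expected to be on the order of 1e-4
```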
The results are shown in Table 7 and Fig. 7. First, despite having the same dimensionality as the previous linear problem with 𝑚 = 40, the overall training time is significantly different. The maximum computational time, i.e., 𝑡(𝑛𝑢𝑝𝑑 = 100), in the nonlinear problem is about 3–5 times higher than that of the linear problem. Such an increase in CPU time indicates that the nonlinearity of the function affects the training time of the Kriging model. Since we use the same hyperparameter optimization technique, the primary cause of this increase in training time is the longer convergence time of the L-BFGS-B optimization. Notice that, regardless of the type of surrogate model used, AK-MCS did not converge to 𝜅 < 5%. The very small failure probability of this problem, which is on the order of 10⁻⁴, leads to a significant fluctuation in the estimated value.

Compared to the previous problem, it is more difficult to observe clear trends in the results. Performance-wise, KPLS1 and KPLS4 yield a significant difference in the number of iterations to reach 𝜀 < 5% when compared with OK. OK requires more updates to reach a consistent 𝜀 < 5%, which is contributed by the fluctuation in 𝜀, especially between the 50th and 70th iterations (see Fig. 7a). This problem is considered more challenging than the linear problem, due to the high dimensionality and the very small failure probability that needs to be estimated. These challenges are reflected in the higher 𝜀, as compared to that of the linear problem. We can see that all KPLS variants have a lower 𝑡(𝑛𝑢𝑝𝑑 = 100) compared to OK, but the difference between the KPLS variants is not significant. Interestingly, KPLS1 converges to 𝜀 < 5% faster than KPLS2 and KPLS3. However, this is likely due to a statistical outlier and does not indicate that KPLS1 truly converges faster than KPLS2 and KPLS3. As can be seen in Fig. 7a, although KPLS2 reaches the defined error threshold, its later predictions of 𝑃𝑓 are not as good as those of KPLS3 and KPLS4. Individual results show that the high mean 𝑛𝑢𝑝𝑑(𝜀 < 5%) of OK is due to one particular run that reaches 𝜀 < 5% at the 85th iteration, which indicates that KPLS4 is more stable than OK in estimating 𝑃𝑓. To reach 𝜀 < 5%, we observe that the saving in training and prediction time obtained by KPLS4 with respect to OK is about 5470 s. If the function evaluation is relatively cheap (e.g., about 10–20 s for one evaluation), KPLS4 is attractive due to the time saving that it offers. However, even if the function evaluation is highly expensive, KPLS4 is more beneficial than OK since it requires fewer function evaluations to reach 𝜀 < 5%.

Fig. 7. Convergence of 𝑃̂𝑓 and the average combined training and prediction time (in seconds) for the nonlinear 40-dimensional performance function.

Fig. 8. Definition of the domain and boundary conditions for the heat conduction problem and the first 20 EOLE modes used to discretize the random field.

From the viewpoint of the 𝜅-stopping criterion, the results show that none of the methods satisfied the stopping criterion even when the maximum number of iterations was reached. It seems that the very small failure probability of this problem leads to the convergence difficulty, since such values are very prone to fluctuation. However, the estimated failure probabilities at 𝑛𝑢𝑝𝑑 = 100 yielded by KPLS4 and OK are sufficiently accurate when compared to the result of MCS. KPLS4 offers a more significant saving of the training and prediction time when compared to OK to reach 𝑛𝑢𝑝𝑑 = 100, i.e., the time saving is about 24,700 s (almost 7 h) to reach similar accuracy. The other KPLS variants are not as accurate as KPLS4 and OK, indicating that a low number of principal components is not sufficient to capture the tail of the distribution in this problem. Fig. 7b shows the exponential increase in the training time of OK, while the trend is more linear for KPLS. However, we observe that there is no sudden drop in the training time of OK for the 40-dimensional nonlinear problem, which implies that the sample size is already sufficient to create a proper Kriging model.

4.4. Benchmark problem 4: a 53-dimensional heat conduction problem

The final benchmark problem is a 53-dimensional heat conduction problem adopted from Konakli and Sudret [46]. A finite difference solver is used to calculate the output (temperature field) and the failure probability. The computational domain 𝐷 is a box bounded by the bottom-left and top-right corners, whose coordinates are (−0.5, −0.5) m and (0.5, 0.5) m, respectively. Thus, we define the domain as 𝐷 = (−0.5, −0.5) m × (0.5, 0.5) m (see Fig. 8). The partial differential equation (PDE) that describes the temperature field 𝑇(𝒛), where 𝒛 ∈ 𝐷, reads

$-\nabla \cdot \big(\kappa_h(\boldsymbol{z})\nabla T(\boldsymbol{z})\big) = Q\,I_A(\boldsymbol{z}).$    (24)

The boundary conditions are defined as 𝑇 = 0 on the top boundary and ∇𝑇 ⋅ 𝒏 = 0 on the rest of the boundaries. The heat source comes from the square domain 𝐴 = (0.2, 0.2) m × (0.3, 0.3) m with 𝑄 = 2 ⋅ 10³ W/m³, as indicated by the indicator function 𝐼𝐴, which is equal to 1 if 𝒛 ∈ 𝐴 and 0 otherwise (see Fig. 8a for an illustration).

Instead of treating the diffusion coefficient 𝜅ℎ(𝒛) as a constant, 𝜅ℎ(𝒛) is set as a lognormal random field defined as 𝜅ℎ(𝒛) = exp[𝑎𝜅ℎ + 𝑏𝜅ℎ 𝑔(𝒛)], where 𝑔(𝒛) is a standard Gaussian random field with a Gaussian autocorrelation function $\rho(\boldsymbol{z}, \boldsymbol{z}') = \exp\big(-\|\boldsymbol{z} - \boldsymbol{z}'\|^2/l^2\big)$ and 𝑙 = 0.2 is the correlation length. The parameters 𝑎𝜅ℎ and 𝑏𝜅ℎ are set such that the mean and standard deviation of 𝜅ℎ are 𝜇𝜅ℎ = 1 W/°C m and 𝜎𝜅ℎ = 0.3 W/°C m, respectively. The discretization of 𝑔(𝒛) is performed using the expansion optimal linear estimation (EOLE) method proposed by Li and Der Kiureghian [47], that is

$\hat{g}(\boldsymbol{z}) = \sum_{i=1}^{M}\frac{\xi_i}{\sqrt{l_i}}\,\boldsymbol{\phi}_i^T\boldsymbol{C}_{\boldsymbol{z}\boldsymbol{\zeta}}(\boldsymbol{z}),$    (25)

Fig. 9. Convergence of 𝑃̂𝑓 for the 53-dimensional heat conduction problem.

where {𝜉1, …, 𝜉𝑀} are standard normal variables; 𝑪𝒛𝜻 is a vector with elements $C_{\boldsymbol{z}\boldsymbol{\zeta}}^{(k)} = \rho(\boldsymbol{z}, \boldsymbol{\zeta}_k)$ for 𝑘 = 1, …, 𝑝; and (𝑙𝑖, 𝝓𝑖) are the pairs of eigenvalues and eigenvectors of the 𝑝 × 𝑝 correlation matrix between all elements, i.e., 𝑪𝒛𝒛 with its (𝑘, 𝑙) element given by $C_{\boldsymbol{z}\boldsymbol{z}}^{(k,l)} = \rho(\boldsymbol{\zeta}_k, \boldsymbol{\zeta}_l)$. For the heat conduction problem, we retain 𝑀 = 53 basis functions, obtained from

$\frac{\sum_{i=1}^{M} l_i}{\sum_{i=1}^{p} l_i} \geq 0.99.$    (26)

The first 20 EOLE modes for this problem are shown in Fig. 8b.
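A hedged sketch of the EOLE discretization of Eqs. (25)–(26) on an arbitrary set of grid points is given below; the node coordinates in the usage example are illustrative, while the correlation length and the 0.99 energy threshold follow the description above.

```python
import numpy as np

def eole_basis(nodes, l=0.2, energy=0.99):
    """Eigenpairs for the EOLE expansion; M is chosen by the criterion of Eq. (26).

    nodes : (p, 2) coordinates of the discretization points zeta_k
    """
    d2 = np.sum((nodes[:, None, :] - nodes[None, :, :]) ** 2, axis=2)
    C_zz = np.exp(-d2 / l**2)                        # Gaussian autocorrelation rho(z, z')
    eigval, eigvec = np.linalg.eigh(C_zz)
    order = np.argsort(eigval)[::-1]                 # sort by decreasing eigenvalue
    eigval, eigvec = eigval[order], eigvec[:, order]
    M = int(np.searchsorted(np.cumsum(eigval) / eigval.sum(), energy)) + 1   # Eq. (26)
    return eigval[:M], eigvec[:, :M]

def eole_realization(xi, nodes, z, l=0.2, energy=0.99):
    """Evaluate g_hat(z) of Eq. (25) for one realization of the standard normal vector xi."""
    eigval, eigvec = eole_basis(nodes, l, energy)
    d2 = np.sum((z[:, None, :] - nodes[None, :, :]) ** 2, axis=2)
    C_z_zeta = np.exp(-d2 / l**2)                    # rows are C_{z,zeta}(z) for each point z
    return C_z_zeta @ (eigvec @ (xi[: eigvec.shape[1]] / np.sqrt(eigval)))

# Usage sketch: a coarse 11 x 11 EOLE grid covering the unit-square domain D
g = np.linspace(-0.5, 0.5, 11)
nodes = np.array([[x, yv] for x in g for yv in g])
rng = np.random.default_rng(0)
field = eole_realization(rng.standard_normal(len(nodes)), nodes, nodes)
```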
We solve Eq. (24) by using an in-house implicit scheme solver and a structured mesh with 100 × 100 elements. The average temperature 𝑇̃ in the domain 𝐵 = (−0.3, −0.2) m × (−0.3, −0.2) m is set as the output of interest computed by the PDE solver. The threshold for the limit state is defined as 𝑇̃ > 3 °C on domain 𝐵. A single evaluation of the PDE takes approximately 9 s, which is relatively fast but not negligible. After running 10 000 PDE simulations, we obtain 𝑃𝑓 = 0.041. This calculation takes roughly 25 h, which shows the high computational cost associated with MCS, even when one evaluation only takes about 10 s to complete. The size of the initial experimental design is set to 𝑛𝑖𝑛𝑖𝑡 = 50, which is the lowest sample size used in the experiment by Konakli and Sudret [46], with 100 additional samples added through active learning.

Fig. 9. Convergence of 𝑃̂𝑓 for the 53-dimensional heat conduction problem.

The convergence results and the statistics are shown in Fig. 9 and Table 8. The average time versus the average 𝜀 is shown in Fig. 9b. Also, notice that the CPU time shown in Fig. 9b already includes the cost of the PDE evaluations. Considering the relatively cheap cost of the PDE evaluations, the main contributor to the overall CPU time of OK seems to be the Kriging training and prediction time. We can see that, at the beginning of the active learning procedure, the KPLS accuracy is low. The accuracy increases as more samples are added and finally surpasses that of OK. KPLS3 and KPLS4 are shown to be more efficient than OK when comparing the required number of updates for convergence. KPLS1 and KPLS2, on the other hand, require more updates than OK to converge to 𝜀 < 5%. KPLS4 is also notably more accurate than OK according to the 𝜅-convergence criterion and at the maximum number of updates. In addition, KPLS4 converged with fewer function evaluations than OK according to the 𝜅-convergence criterion while retaining good accuracy.

Fig. 10. Average training time of the Kriging model and function evaluation for the 53-dimensional heat conduction problem (in seconds).

It is also interesting to see that all variants of KPLS are significantly faster than OK, as can be observed in Fig. 10. The speedup is substantial, as indicated, for example, by the total computational cost of KPLS4, which is about ten times lower than that of OK. We can also see a clear trend: although using fewer principal components can accelerate the total active learning process, using more principal components is more beneficial since it reduces the CPU time required to reach a tolerable error level. This is primarily caused by the reduction in the number of function evaluations required by KPLS4 for convergence. Besides, we also observe that the reduction in the overall CPU time is more significant than in the previous benchmark problems. OK especially consumes a considerable amount of time to train due to the expensive hyperparameter training in a high-dimensional space. As such, OK experiences fluctuations in the estimation of the 𝑃𝑓 value. It is also worth noting that, although KPLS1 requires less time to finish 100 iterations than the other KPLS variants, it is the least efficient method since it is slower than the other KPLS models in reaching the tolerable error level. Furthermore, KPLS1 fails to reach the 5% error level in 3 out of 5 independent runs.

4.5. Breakdown of computational time

To further investigate the computational time of KPLS, we performed a breakdown analysis of the CPU time for the 100-dimensional linear performance function, and the results are shown in Table 9. The 100-dimensional linear performance function was selected due to its very high dimensionality; hence, it is the most computationally expensive problem in this paper. The analysis is performed for the initial sample set (𝑛 = 40) and after 150 updates (𝑛 = 190), in which we break down the wall-clock time into the training time and the prediction time for 10⁶ MCS samples.
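As an illustration of how such a breakdown can be produced, the sketch below times the hyperparameter training and the prediction over a Monte Carlo population for ordinary Kriging and for KPLS. It assumes the open-source SMT package (smt.surrogate_models.KRG and KPLS) and a placeholder linear performance function; the implementation actually used for the results reported here is not assumed, and the Monte Carlo population is kept smaller than 10⁶ samples to keep the sketch light.

```python
import time
import numpy as np
from smt.surrogate_models import KRG, KPLS   # assumed toolbox; any Kriging/KPLS implementation works

m = 100                                      # input dimensionality

def g(x):                                    # placeholder for the 100-dimensional linear performance function
    return np.sum(x, axis=1) + 3.0

rng = np.random.default_rng(0)
x_train = rng.standard_normal((40, m))       # initial experimental design (n = 40)
y_train = g(x_train)
x_mcs = rng.standard_normal((10**5, m))      # MC population (reduced from 10^6 to keep the sketch light)

def breakdown(model):
    """Return (training time, prediction time over the MC population) in seconds."""
    t0 = time.perf_counter()
    model.set_training_values(x_train, y_train)
    model.train()                            # hyperparameter optimization happens here
    t_train = time.perf_counter() - t0

    t0 = time.perf_counter()
    model.predict_values(x_mcs)              # mean prediction
    model.predict_variances(x_mcs)           # prediction variance, needed by the U-function
    t_pred = time.perf_counter() - t0
    return t_train, t_pred

for name, model in [("OK", KRG(print_global=False)),
                    ("KPLS4", KPLS(n_comp=4, print_global=False))]:
    t_train, t_pred = breakdown(model)
    print(f"{name}: train {t_train:.1f} s, predict {t_pred:.1f} s")
```

Repeating such a measurement at 𝑛 = 40 and 𝑛 = 190 reproduces the kind of breakdown reported in Table 9.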


Table 8
Table of results for the 53-dimensional heat conduction problem. Notice that 𝑡 already includes the cost of PDE evaluation. The failure probability
estimated by MCS is 𝑃𝑓 = 4.1 × 10−2 .
Method KPLS1 KPLS2 KPLS3 KPLS4 OK
𝑁𝑓 𝑎𝑖𝑙 3 0 0 0 0
𝑛𝑢𝑝𝑑 (𝜀 < 5%) 92.4 (14.06%) 70.4 (16.91%) 73.00 (11.50%) 68.88 (23.60%) 81.4 (20.23%)
𝑡(𝜀 < 5%) 1.259E+3 s (13.70%) 1.084E+3 s (23.40%) 1.125E+3 s (16.62%) 1.129E+3 s (25.30%) 9.416E+3 s (34.16%)
𝑛𝑢𝑝𝑑 (𝜅 < 5%) 89.6 (12.16%) 74.40 (25.12%) 72.00 (11.41%) 65.40 (12.45%) 93.8 (12.53%)
𝑃̂𝑓 (𝜅 < 5%) 4.092E−2 (7.89%) 4.120E−2 (3.10%) 4.182E−2 (6.21%) 4.077E−2 (4.84%) 4.050E−2 (3.15%)
𝑡(𝜅 < 5%) 1.220E+3 s (12.63%) 1.136E+3 s (28.88%) 1.108E+3 s (16.91%) 1.058E+3 s (17.39%) 1.221E+4 s (23.79%)
𝑃̂𝑓 (𝑛𝑢𝑝𝑑 = 100) 4.188E−2 (4.83%) 4.138E−2 (2.46%) 4.150E−2 (1.20%) 4.101E−2 (1.10%) 4.031E−2 (2.30%)
𝑡(𝑛𝑢𝑝𝑑 = 100) 1.370E+3 s (2.29%) 1.571E+3 s (7.14%) 1.655E+3 s (8.84%) 1.76E+3 s (2.82%) 1.37E+4 s (1.82%)
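The reference value and the error statistics in Table 8 follow from standard Monte Carlo post-processing: the failure probability is the sample mean of the limit-state indicator, and a surrogate-based estimate is judged by its relative deviation from this reference. The snippet below is a generic sketch of that bookkeeping; the file holding the 10 000 PDE outputs and the use of a simple relative error for 𝜀 are assumptions for illustration.

```python
import numpy as np

# Hypothetical file containing the average temperature from the 10,000 MCS PDE runs.
T_avg = np.loadtxt("mcs_average_temperature.txt")

threshold = 3.0                      # limit state: failure when the average temperature exceeds 3 degC
fail = T_avg > threshold

pf_mcs = fail.mean()                 # reference failure probability (about 4.1e-2 for this problem)
cov_pf = np.sqrt((1.0 - pf_mcs) / (pf_mcs * fail.size))   # coefficient of variation of the MCS estimate

def relative_error(pf_hat, pf_ref=pf_mcs):
    """Relative deviation of a surrogate-based estimate from the MCS reference."""
    return abs(pf_hat - pf_ref) / pf_ref

print(f"P_f (MCS) = {pf_mcs:.3e}, CoV = {cov_pf:.2%}, error of 4.101e-2: {relative_error(4.101e-2):.2%}")
```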

Table 9
Breakdown of wall-clock time for the linear 100-dimensional problem for the initial sample set (𝑛 = 40) and after 150 updates (𝑛 = 190).
Method KPLS1 KPLS2 KPLS3 KPLS4 OK
Train time (𝑛 = 40) 3 s 7 s 12 s 15 s 328 s
Prediction time (𝑛 = 40) 65 s 71 s 69 s 71 s 72 s
Train time (𝑛 = 190) 42 s 67 s 84 s 74 s 632 s
Prediction time (𝑛 = 190) 197 s 213 s 222 s 231 s 250 s
It can be seen that the training time for OK is more expensive than its prediction time for both 𝑛 = 40 and 𝑛 = 190, which is caused by the high dimensionality of the problem. Results also show that the prediction time of OK is similar to that of KPLS, regardless of the number of principal components. This similarity makes sense since KPLS primarily reduces the training time and not the prediction time. However, it is clear that the reduction in training time offered by KPLS is substantial. In fact, this reduction greatly accelerates the AK-MCS process when using KPLS, especially KPLS4. The result indicates that reducing the number of hyperparameters in Kriging via KPLS strongly affects the training time. However, it is worth noting that KPLS4 is the only variant that can ensure a good estimate of the failure probability with less training time than OK.

4.6. Numerical insights and remarks

Based on the numerical results presented above, we now present some remarks and insights into the performance of KPLS relative to OK in the context of high-dimensional reliability analysis applied within the AK-MCS framework:

1. Although KPLS is generally faster than OK, the number of principal components needs to be carefully selected. At the initial iterations, KPLS typically yields estimations of 𝑃𝑓 with low accuracy. However, the accuracy increases as more samples are added while AK-MCS progresses. Our results show that four principal components achieve a balance between computational efficiency and the required level of accuracy.
2. Using the uncertainty-based stopping criterion leads to slower convergence of KPLS due to its higher variance of 𝑃̂𝑓 compared to OK; however, the results show that the estimated failure probability from KPLS (especially KPLS4) is comparable to that of OK for the same number of function calls (as evidenced by the result for the 10-dimensional bridge truss structure problem). Furthermore, we found that it becomes much more difficult for AK-MCS to converge according to the uncertainty-based stopping criterion on high-dimensional problems. For these reasons, a more conventional 𝜅-stopping criterion, based on the relative difference in the failure probability and the value of the limit state function, is adopted for KPLS, but with a stricter threshold. Results show that KPLS, especially with four principal components, converged faster to engineering accuracy than OK under the 𝜅-stopping criterion.
3. Using too few principal components risks failing to estimate the tail of the distribution accurately, which does not justify the reduced computational cost. This is because the selected principal components are insufficient to represent the complexity of the problem.
4. KPLS with four principal components generally yields faster convergence than OK in terms of both the number of function evaluations and the CPU time. The net effect is that KPLS4 converges to the required accuracy faster than OK, which is beneficial regardless of the cost of a single function evaluation.
5. KPLS offers no significant benefits on low-dimensional problems. The time reduction in such problems is not significant since the training of OK itself is already fast. Thus, KPLS is more suitable for problems with higher dimensionality; as a rule of thumb, in cases with 𝑚 ≥ 40.

5. Conclusions and future works

In this paper, we assess the performance of Kriging with partial least squares (KPLS) in accelerating the active learning process for high-dimensional reliability analysis. The KPLS is implemented within the AK-MCS strategy with the U-function as the learning criterion, although KPLS can also be combined with other active learning strategies. From the results, we observe that, with an adequate number of principal components, KPLS can accelerate AK-MCS in terms of both the CPU time and the number of function evaluations needed to achieve the required accuracy of the failure probability. The results also indicate that AK-MCS benefits from both the faster training and prediction time and the better accuracy of KPLS in capturing the tail of the distribution when more enrichment samples are added. The results of our study show that KPLS with four principal components is adequate to ensure that the probability of failure is accurately estimated. Furthermore, although reducing the number of principal components decreases the hyperparameter training time, it might take longer to reach the required level of accuracy compared to when a higher number of principal components is used. Based on our results, we infer that KPLS is useful in problems with high-dimensional random spaces, as long as an adequate number of principal components is used. The advantage of KPLS with four principal components is clearly demonstrated in the high-dimensional heat conduction problem, with a speedup factor of around ten.

One interesting avenue for future research is to study the performance of KPLS when implemented with other active learning methods. A more thorough study on stopping criteria for Kriging-based reliability analysis with dimensionality reduction should also be performed. It would also be interesting to modify KPLS to make it suitable for problems with extremely small failure probabilities, which is not the subject of this paper. Furthermore, the use of PLS itself can be extended to other surrogate models in the context of high-dimensional reliability analysis. The performance of KPLS can also be further improved by implementing the sparse version of PLS, which is also capable of detecting unimportant variables. Finally, although our main aim in this paper is to study the feasibility of KPLS for reliability analysis, the efficacy of KPLS should also be assessed against other dimensionality reduction techniques.


CRediT authorship contribution statement

Lavi Rizki Zuhal: Conceptualization, Methodology, Formal analysis, Validation, Supervision. Ghifari Adam Faza: Methodology, Software, Investigation, Writing - original draft. Pramudita Satria Palar: Conceptualization, Methodology, Formal analysis, Writing - review & editing. Rhea Patricia Liem: Formal analysis, Methodology, Writing - review & editing.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

Lavi Rizki Zuhal and Pramudita Satria Palar were funded in part through the Penelitian Dasar administered by Direktorat Riset dan Pengabdian Masyarakat – Direktorat Jenderal Penguatan Riset dan Pengembangan – Kementerian Riset dan Teknologi/Badan Riset dan Inovasi Nasional, Republik Indonesia.

References

[1] Hammersley J. Monte Carlo methods. Springer Science & Business Media; 2013.
[2] Melchers RE, Beck AT. Structural reliability analysis and prediction. John Wiley & Sons; 2018.
[3] de Angelis M, Patelli E, Beer M. Advanced line sampling for efficient robust reliability analysis. Struct Saf 2015;52:170–82.
[4] Au S-K, Beck JL. Estimation of small failure probabilities in high dimensions by subset simulation. Probab Eng Mech 2001;16(4):263–77.
[5] Hasofer AM, Lind NC. Exact and invariant second-moment code format. J Eng Mech Div 1974;100(1):111–21.
[6] Rackwitz R, Flessler B. Structural reliability under combined random load sequences. Comput Struct 1978;9(5):489–94.
[7] Hurtado JE, Alvarez DA. Neural-network-based reliability analysis: a comparative study. Comput Methods Appl Mech Engrg 2001;191(1–2):113–32.
[8] Papadopoulos V, Giovanis DG, Lagaros ND, Papadrakakis M. Accelerated subset simulation with neural networks for reliability analysis. Comput Methods Appl Mech Engrg 2012;223:70–80.
[9] Li H-s, Lü Z-z, Yue Z-f. Support vector machine for structural reliability analysis. Appl Math Mech 2006;27(10):1295–303.
[10] Bourinet J-M. Rare-event probability estimation with adaptive support vector regression surrogates. Reliab Eng Syst Saf 2016;150:210–21.
[11] Gaspar B, Teixeira A, Soares CG. Assessment of the efficiency of kriging surrogate models for structural reliability analysis. Probab Eng Mech 2014;37:24–34.
[12] Bichon BJ, Eldred MS, Swiler LP, Mahadevan S, McFarland JM. Efficient global reliability analysis for nonlinear implicit performance functions. AIAA J 2008;46(10):2459–68.
[13] Echard B, Gayton N, Lemaire M. AK-MCS: an active learning reliability method combining kriging and Monte Carlo simulation. Struct Saf 2011;33(2):145–54.
[14] Sun Z, Wang J, Li R, Tong C. LIF: A new kriging based learning function and its application to structural reliability analysis. Reliab Eng Syst Saf 2017;157:152–65.
[15] Zhang X, Wang L, Sørensen JD. REIF: A novel active-learning function toward adaptive kriging surrogate models for structural reliability analysis. Reliab Eng Syst Saf 2019;185:440–54.
[16] Fauriat W, Gayton N. AK-SYS: an adaptation of the AK-MCS method for system reliability. Reliab Eng Syst Saf 2014;123:137–44.
[17] Yun W, Lu Z, Zhou Y, Jiang X. AK-SYSi: an improved adaptive kriging model for system reliability analysis with multiple failure modes by a refined U learning function. Struct Multidiscip Optim 2019;59(1):263–78.
[18] Wu H, Zhu Z, Du X. System reliability analysis with autocorrelated kriging predictions. J Mech Des 2020;142(10).
[19] Yang X, Liu Y, Mi C, Tang C. System reliability analysis through active learning kriging model with truncated candidate region. Reliab Eng Syst Saf 2018;169:235–41.
[20] Moustapha M, Sudret B, Bourinet J-M, Guillaume B. Quantile-based optimization under uncertainties using adaptive kriging surrogate models. Struct Multidiscip Optim 2016;54(6):1403–21.
[21] Zhang M, Yao Q, Sheng Z, Hou X. A sequential reliability assessment and optimization strategy for multidisciplinary problems with active learning kriging model. Struct Multidiscip Optim 2020;62(6):2975–94.
[22] Moustapha M, Sudret B. Surrogate-assisted reliability-based design optimization: a survey and a unified modular framework. Struct Multidiscip Optim 2019;1–20.
[23] Marelli S, Sudret B. An active-learning algorithm that combines sparse polynomial chaos expansions and bootstrap for structural reliability analysis. Struct Saf 2018;75:67–74.
[24] Cheng K, Lu Z. Structural reliability analysis based on ensemble learning of surrogate models. Struct Saf 2020;83:101905.
[25] Lelièvre N, Beaurepaire P, Mattrand C, Gayton N. AK-MCSi: A kriging-based method to deal with small failure probabilities and time-consuming models. Struct Saf 2018;73:1–11.
[26] Xu J, Zhu S. An efficient approach for high-dimensional structural reliability analysis. Mech Syst Signal Process 2019;122:152–70.
[27] Bourinet J-M, Deheeger F, Lemaire M. Assessing small failure probabilities by combined subset simulation and support vector machines. Struct Saf 2011;33(6):343–53.
[28] Forrester A, Sobester A, Keane A. Engineering design via surrogate modelling: a practical guide. John Wiley & Sons; 2008.
[29] Bouhlel MA, Bartoli N, Otsmane A, Morlier J. Improving kriging surrogates of high-dimensional design models by partial least squares dimension reduction. Struct Multidiscip Optim 2016;53(5):935–52.
[30] Nguyen-Tuong D, Seeger M, Peters J. Model learning with local Gaussian process regression. Adv Robot 2009;23(15):2015–34.
[31] van Stein B, Wang H, Kowalczyk W, Bäck T, Emmerich M. Optimally weighted cluster kriging for big data regression. In: International symposium on intelligent data analysis. Springer; 2015, p. 310–21.
[32] Moustapha M, Bourinet J-M, Guillaume B, Sudret B. Comparative study of kriging and support vector regression for structural engineering applications. ASCE-ASME J Risk Uncertain Eng Syst A Civ Eng 2018;4(2):04018005.
[33] Constantine PG. Active subspaces: emerging ideas for dimension reduction in parameter studies. SIAM; 2015.
[34] Jiang Z, Li J. High dimensional structural reliability with dimension reduction. Struct Saf 2017;69:35–46.
[35] Zhou Y, Lu Z. An enhanced kriging surrogate modeling technique for high-dimensional problems. Mech Syst Signal Process 2020;140:106687.
[36] Zhou T, Peng Y. Kernel principal component analysis-based Gaussian process regression modelling for high-dimensional reliability analysis. Comput Struct 2020;241:106358.
[37] Wold H. Estimation of principal components and related models by iterative least squares. Multivariate Anal 1966;391–420.
[38] Amine Bouhlel M, Bartoli N, Regis RG, Otsmane A, Morlier J. Efficient global optimization for high-dimensional constrained problems by using the kriging models combined with the partial least squares method. Eng Optim 2018;50(12):2038–53.
[39] Byrd RH, Lu P, Nocedal J, Zhu C. A limited memory algorithm for bound constrained optimization. SIAM J Sci Comput 1995;16(5):1190–208.
[40] Ling C, Lu Z, Zhang X. An efficient method based on AK-MCS for estimating failure probability function. Reliab Eng Syst Saf 2020;106975.
[41] Razaaly N, Congedo PM. Extension of AK-MCS for the efficient computation of very small failure probabilities. Reliab Eng Syst Saf 2020;203:107084.
[42] Schöbi R, Sudret B, Marelli S. Rare event estimation using polynomial-chaos kriging. ASCE-ASME J Risk Uncertain Eng Syst A Civ Eng 2017;3(2):D4016002.
[43] Allaix DL, Carbone VI. An improvement of the response surface method. Struct Saf 2011;33(2):165–72.
[44] Rackwitz R. Reliability analysis—a review and some perspectives. Struct Saf 2001;23(4):365–95.
[45] Williams CK, Rasmussen CE. Gaussian processes for machine learning, Vol. 2. MIT Press, Cambridge, MA; 2006.
[46] Konakli K, Sudret B. Reliability analysis of high-dimensional models using low-rank tensor approximations. Probab Eng Mech 2016;46:18–36.
[47] Li C-C, Der Kiureghian A. Optimal discretization of random fields. J Eng Mech 1993;119(6):1136–54.
