Knowledge-Based Systems
journal homepage: www.elsevier.com/locate/knosys
Article history: Received 9 September 2018; Received in revised form 19 June 2019; Accepted 28 July 2019; Available online 31 July 2019

Keywords: Surrogate model; Bayesian inference; Gaussian process; High dimensional model representation

Abstract

Recently, gradient-enhanced surrogate models have drawn extensive attention for function approximation, in which the gradient information is utilized to improve the surrogate model accuracy. In this work, a gradient-enhanced high dimensional model representation (HDMR) is established based on the Bayesian inference technique. The proposed method first assigns a Gaussian process prior to the model response function and its partial derivative functions (with respect to all the input variables). Then the auto-covariance functions and the cross-covariance functions of these random processes are established respectively by the HDMR basis functions. Finally, the posterior distribution of the response function is analytically obtained through Bayes' theorem. The proposed method combines the sample information and gradient information in a seamless way to yield a highly accurate HDMR prediction model. We demonstrate our method via several examples, and the results all suggest that combining gradient information with sample information provides more accurate prediction results at reduced computational cost.

© 2019 Elsevier B.V. All rights reserved.
https://doi.org/10.1016/j.knosys.2019.104903
K. Cheng, Z. Lu and K. Chaozhang / Knowledge-Based Systems 184 (2019) 104903

anchor points in the parameter space, and the Lagrange or linear interpolation method is utilized to estimate the component functions. Rabitz et al. [37] used equidistantly distributed sample points along each axis of the input parameter space to develop Cut-HDMR. Liu et al. [27] suggested using non-uniform optimal nodes to mitigate Runge's phenomenon; the optimal nodes are selected as the nodes of the Legendre–Gauss, Chebyshev–Gauss and Clenshaw–Curtis quadratures. ANOVA-HDMR, also known as random sampling (RS)-HDMR, expands the component functions in terms of analytical basis functions, and Monte Carlo simulation or other techniques are utilized to estimate the basis function coefficients. Li et al. [36] suggested using orthogonal polynomials, cubic B-splines and polynomials to approximate the ANOVA-HDMR component functions. Luo et al. [34] used the reproducing kernel technique to estimate arbitrary-order ANOVA-HDMR component functions. Lambert et al. [38] utilized the group method of data handling (GMDH) to construct the ANOVA-HDMR surrogate model based on Legendre polynomial basis functions.

This work aims at developing an ANOVA-HDMR surrogate model that integrates gradient information based on the Bayesian inference technique, namely, gradient-enhanced HDMR (GE-HDMR). First, the response function of an underlying model is approximated by the HDMR component functions with unknown coefficients. Then we assign a GP prior to the HDMR model and its partial derivative functions. The auto-covariance functions and the cross-covariance functions of these GPs are defined by the HDMR component functions, respectively. Given a set of samples (response information and gradient information), the posterior distribution of the model response can be computed by means of Bayes' theorem, and the GE-HDMR surrogate model is analytically given by its mean function. The proposed method combines the sample information and gradient information to approximate the response function, and thus provides much more accurate predictions than the classic HDMR model built without gradient information. Several examples are used to validate the accuracy and efficiency of the presented method; the results demonstrate that the developed GE-HDMR surrogate model is much more efficient and accurate than the classic HDMR model.

The rest of this paper is organized as follows: Section 2 reviews the basic theory of classic HDMR. The construction of the GE-HDMR surrogate model is introduced in detail in Section 3. In Section 4, several test examples are used to illustrate the performance of the proposed GE-HDMR method. Finally, conclusions are drawn in Section 5.

2. High dimensional model representation

High-dimensional function estimation faces the so-called ''curse of dimensionality'': the sample size needed to approximate a function to a satisfying accuracy level increases exponentially with the dimensionality of the function [34]. However, HDMR provides a remarkable way to overcome this predicament by approximating a high-dimensional function with a sum of low-dimensional functions [27,30]. Consider a square-integrable response function y = g(x), where the n-dimensional independent input parameters are gathered in the vector x. HDMR expresses the response function g(x) as a finite number of components in terms of the input parameters:

g(x) = g_0 + \sum_{k_1=1}^{n} g_{k_1}(x_{k_1}) + \sum_{1 \le k_1 \le k_2 \le n} g_{k_1 k_2}(x_{k_1}, x_{k_2}) + \cdots + g_{1,2,\ldots,n}(x_1, \ldots, x_n),   (1)

where

g_0 = E[g(x)],
g_{k_1}(x_{k_1}) = E[g(x) \mid x_{k_1}] - g_0,
g_{k_1 k_2}(x_{k_1}, x_{k_2}) = E[g(x) \mid x_{k_1}, x_{k_2}] - g_{k_1}(x_{k_1}) - g_{k_2}(x_{k_2}) - g_0,
\vdots   (2)

In Eq. (1), g_0 is a constant denoting the zeroth-order effect on g(x). The component functions g_{k_1}(x_{k_1}) (k_1 = 1, \ldots, n) are the first-order terms expressing the individual effect of the input parameter x_{k_1} on the output response g(x). The component functions g_{k_1 k_2}(x_{k_1}, x_{k_2}) (1 \le k_1 \le k_2 \le n) are the second-order terms describing the interactive effects of the input parameters x_{k_1} and x_{k_2} on the output response. Similarly, the higher-order component functions g_u(x_u) (u \subseteq \{1, \ldots, n\}) denote the cooperative effects of a set of parameters x_u. Experience shows that the high-order interactions among input parameters in Eq. (1) are often negligible, so the HDMR can be truncated to include terms up to the second-order components [30,34], namely

g(x) = g_0 + \sum_{k_1=1}^{n} g_{k_1}(x_{k_1}) + \sum_{1 \le k_1 \le k_2 \le n} g_{k_1 k_2}(x_{k_1}, x_{k_2}).   (3)

The total number of summands in Eq. (3) is 1 + n + n(n - 1)/2, and the component functions in Eq. (3) can be approximated as [38]

g_{k_1}(x_{k_1}) \approx \sum_{r=1}^{d} \alpha_r^{(k_1)} \varphi_r(x_{k_1}),
g_{k_1 k_2}(x_{k_1}, x_{k_2}) \approx \sum_{p=1}^{l} \sum_{q=1}^{m} \beta_{pq}^{(k_1 k_2)} \varphi_p(x_{k_1}) \varphi_q(x_{k_2}),
\vdots   (4)

where \{\varphi_p\}_{p \in \mathbb{N}^*} (\mathbb{N}^* = 1, 2, \ldots) are the orthogonal polynomial basis functions, \alpha_r^{(k_1)} and \beta_{pq}^{(k_1 k_2)} are the basis function coefficients, and d, l and m are the predefined polynomial orders of the bases.

3. Gradient-enhanced HDMR based on Bayesian inference

3.1. Bayesian inference method

In the Bayesian framework, the crux is to select an appropriate prior for the model output g(x). In general, a GP is chosen as the prior for g(x) [2]. Given the training sample set D = (X, Y), where X = \{x^{(1)}, \ldots, x^{(N)}\}^T is the input data, Y = \{g(x^{(1)}), \ldots, g(x^{(N)})\}^T is the corresponding model response, and N is the size of the training sample set, the joint distribution of Y is Gaussian under the GP prior hypothesis, i.e., Y \sim N(\mu_Y, K), and its joint probability density function (PDF) f(Y) can be written as

f(Y) = (2\pi)^{-N/2} |K|^{-1/2} \exp\left( -\frac{1}{2} (Y - \mu_Y)^T K^{-1} (Y - \mu_Y) \right),   (5)

where \mu_Y and K are the mean vector and covariance matrix of Y, respectively. Generally, the covariance matrix K is conveniently built from the squared exponential covariance function:

K_{ij} = \mathrm{Cov}\left( g(x^{(i)}), g(x^{(j)}) \right) = \exp\left( -\frac{1}{2} \sum_{d=1}^{n} \frac{(x_d^{(i)} - x_d^{(j)})^2}{w_d^2} \right),   (6)

where w = (w_1, \ldots, w_n) is the hyper-parameter vector of the covariance function. Under the GP prior assumption, the posterior distribution of the model output is also a GP, which can be obtained analytically by means of Bayes' theorem. The analytic
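The truncated expansion of Eqs. (3)–(4) is straightforward to evaluate once its coefficients are known. A minimal sketch, assuming Legendre bases on inputs rescaled to [-1, 1]; the coefficient containers `alpha` and `beta` are hypothetical stand-ins for the values the paper later obtains from the Bayesian posterior:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def hdmr_predict(x, g0, alpha, beta):
    """Evaluate the second-order HDMR truncation, Eqs. (3)-(4).

    x     : (n,) input point, assumed rescaled to [-1, 1] for the Legendre bases
    g0    : zeroth-order (constant) term
    alpha : dict {k1: (d,) coefficients of phi_1..phi_d for variable k1}
    beta  : dict {(k1, k2): (l, m) coefficient matrix for the pair (k1, k2)}
    (alpha and beta are assumed non-empty here.)
    """
    n = len(x)
    max_deg = max(max(len(a) for a in alpha.values()),
                  max(b.shape[0] for b in beta.values()),
                  max(b.shape[1] for b in beta.values()))
    # Column j of `phi` holds phi_1(x_j), ..., phi_maxdeg(x_j); the constant
    # polynomial is excluded, since Eq. (3) carries it separately as g0.
    phi = np.empty((max_deg, n))
    for r in range(1, max_deg + 1):
        coef = np.zeros(r + 1)
        coef[r] = 1.0                               # selects Legendre P_r
        phi[r - 1] = legval(x, coef)

    g = g0
    for k1, a in alpha.items():                     # first-order components
        g += a @ phi[:len(a), k1]
    for (k1, k2), b in beta.items():                # second-order components
        g += phi[:b.shape[0], k1] @ b @ phi[:b.shape[1], k2]
    return g
```

For instance, with a single first-order coefficient 2 on variable 0 and a single pair coefficient 1 on (0, 1), the prediction at x = (0.5, -0.25) is g0 + 2·P1(0.5) + P1(0.5)·P1(-0.25).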
construction of the posterior distribution of the model output forms the state of the art of GP-based surrogate modeling [2]. In the next subsection, we provide the formulation for developing GE-HDMR by the Bayesian inference technique.

3.2. Gradient-enhanced HDMR

Following the procedure in Section 2, the HDMR can be approximated as

g(x) \approx g_0 + \sum_{k_1=1}^{n} \sum_{r=1}^{d} \alpha_r^{(k_1)} \varphi_r(x_{k_1}) + \sum_{1 \le k_1 \le k_2 \le n} \sum_{p=1}^{l} \sum_{q=1}^{m} \beta_{pq}^{(k_1 k_2)} \varphi_p(x_{k_1}) \varphi_q(x_{k_2}).   (7)

Thus the gradient g_\partial(x) = \left[ \partial g(x)/\partial x_1, \ldots, \partial g(x)/\partial x_n \right] of g(x) can be obtained by differentiating Eq. (7) term by term. The cross-covariance function between the response and a partial derivative then follows from the covariance function in Eq. (9) as

C(x^{(i)}, x^{(j)}) = \mathrm{Cov}\left[ g(x^{(i)}), \frac{\partial g(x^{(j)})}{\partial x_{i_1}} \right] = \frac{\partial K(x^{(i)}, x^{(j)})}{\partial x_{i_1}^{(j)}}
= \sum_{k_1=1}^{n} \sum_{r=1}^{d} \varphi_r(x_{k_1}^{(i)}) \frac{\partial \varphi_r(x_{k_1}^{(j)})}{\partial x_{i_1}^{(j)}} + \sum_{1 \le k_1 \le k_2 \le n} \sum_{p=1}^{l} \sum_{q=1}^{m} \varphi_p(x_{k_1}^{(i)}) \varphi_q(x_{k_2}^{(i)}) \frac{\partial \left[ \varphi_p(x_{k_1}^{(j)}) \varphi_q(x_{k_2}^{(j)}) \right]}{\partial x_{i_1}^{(j)}}.   (11)

For N realizations X = \{x^{(1)}, \ldots, x^{(N)}\}^T of the random input vector, the response vector Y = \{g(x^{(1)}), \ldots, g(x^{(N)})\}^T is computed by the deterministic solver at these points. In addition, the gradient information Y_\partial = \left( \partial g(x^{(1)})/\partial x_1, \ldots, \partial g(x^{(1)})/\partial x_n, \ldots, \partial g(x^{(N)})/\partial x_1, \ldots, \partial g(x^{(N)})/\partial x_n \right) can be obtained efficiently via the direct or adjoint method [7,40]. Therefore, the joint distribution of Y, Y_\partial and g(x) for an untried site x in the parameter space is again Gaussian.
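The key ingredient here is a joint covariance over responses and their derivatives. As a minimal illustration (using the squared exponential kernel of Eq. (6), whose derivatives have simple closed forms, rather than the paper's HDMR-basis covariance of Eq. (11)), the sketch below assembles the value/derivative covariance blocks; the function names and the restriction to a single derivative direction `d` are simplifying assumptions:

```python
import numpy as np

def se_kernel(xi, xj, w):
    """K(x_i, x_j) = exp(-0.5 * sum_d (x_id - x_jd)^2 / w_d^2), cf. Eq. (6)."""
    diff = xi - xj
    return np.exp(-0.5 * np.sum(diff ** 2 / w ** 2))

def se_cross(xi, xj, w, d):
    """Cov[g(x_i), dg(x_j)/dx_jd] = dK/dx_jd = K * (x_id - x_jd) / w_d^2."""
    return se_kernel(xi, xj, w) * (xi[d] - xj[d]) / w[d] ** 2

def joint_cov(X, w, d=0):
    """Assemble [[K, C], [C^T, Kdd]] for responses and d-th partials at X."""
    N = X.shape[0]
    K = np.array([[se_kernel(X[i], X[j], w) for j in range(N)]
                  for i in range(N)])
    C = np.array([[se_cross(X[i], X[j], w, d) for j in range(N)]
                  for i in range(N)])
    # Cov[dg(x_i)/dx_id, dg(x_j)/dx_jd] = d^2 K / (dx_id dx_jd)
    #                                   = K * (1/w_d^2 - (x_id - x_jd)^2 / w_d^4)
    Kdd = np.array([[se_kernel(X[i], X[j], w)
                     * (1.0 / w[d] ** 2
                        - (X[i, d] - X[j, d]) ** 2 / w[d] ** 4)
                     for j in range(N)] for i in range(N)])
    return np.block([[K, C], [C.T, Kdd]])
```

Because derivatives of a GP are again a GP, the assembled block matrix is a valid (symmetric, positive semi-definite) covariance, which is what makes conditioning on both Y and Y_∂ well posed.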
Table 1
Eight benchmark examples.

ID   Expression                                                                                Variable space
F1   g(x) = \sum_{i=1}^{5} 5\ln(x_i)^2 - \left( \prod_{i=1}^{5} x_i \right)^{0.2}              x_i ~ N(5, 1)
F2   g(x) = \sum_{i=1}^{10} x_i \left( c_i + \ln \frac{x_i^2}{x_1^2 + \cdots + x_{10}^2} \right)   x_i ~ N(5, 2)
F3   g(x) = \sum_{i=1}^{10} \exp(x_i) \left[ c_i + x_i - \ln \left( \sum_{i=1}^{10} \exp(x_i) \right) \right]   x_i ~ N(0, 0.5)

For F2 and F3: c_{1 \le i \le 10} = [-6.089, -17.164, -34.054, -5.914, -24.721, -14.986, -24.1, -10.708, -26.662, -22.179]
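Benchmark F3 admits a closed-form gradient, which is exactly the kind of cheap gradient information Y_∂ that GE-HDMR consumes. A sketch, assuming the form of F3 as reconstructed in Table 1 (the log-sum terms produced by the chain rule cancel on differentiation, leaving a compact expression):

```python
import numpy as np

# Constants c_1..c_10 shared by F2 and F3 in Table 1.
C = np.array([-6.089, -17.164, -34.054, -5.914, -24.721,
              -14.986, -24.1, -10.708, -26.662, -22.179])

def f3(x):
    """F3: g(x) = sum_i exp(x_i) * (c_i + x_i - ln(sum_j exp(x_j)))."""
    logsum = np.log(np.sum(np.exp(x)))
    return np.sum(np.exp(x) * (C + x - logsum))

def f3_grad(x):
    """Analytic gradient: dg/dx_k = exp(x_k) * (c_k + x_k - ln(sum_j exp(x_j))).

    Differentiating g gives exp(x_k)*(c_k + x_k - L) + exp(x_k) - exp(x_k),
    with L = ln(sum_j exp(x_j)): the extra terms cancel exactly.
    """
    logsum = np.log(np.sum(np.exp(x)))
    return np.exp(x) * (C + x - logsum)
```

A quick central-finite-difference check against `f3` confirms the analytic gradient before feeding it to a gradient-enhanced surrogate.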
µ(x) can be expressed as

\mu(x) = \mu_g + \left[ K(x, X) \;\; C(x, X) \right] \begin{bmatrix} K_1 & K_2 \\ K_3 & K_4 \end{bmatrix} \begin{bmatrix} Y - \mu_Y \\ Y_\partial \end{bmatrix}
= \mu_g + K(x, X) \left[ K_1 (Y - \mu_Y) + K_2 Y_\partial \right] + C(x, X) \left[ K_3 (Y - \mu_Y) + K_4 Y_\partial \right]
= \mu_g + K(x, X) \omega_1 + C(x, X) \omega_2
= \mu_g + \sum_{i=1}^{N} \omega_1^{(i)} \left( \sum_{k_1=1}^{n} \sum_{r=1}^{d} \varphi_r(x_{k_1}^{(i)}) \varphi_r(x_{k_1}) + \sum_{1 \le k_1 \le k_2 \le n} \sum_{p=1}^{l} \sum_{q=1}^{m} \varphi_p(x_{k_1}^{(i)}) \varphi_q(x_{k_2}^{(i)}) \varphi_p(x_{k_1}) \varphi_q(x_{k_2}) + \cdots \right)
+ \sum_{i=1}^{N} \omega_2^{(i)} \left( \sum_{k_1=1}^{n} \sum_{r=1}^{d} \frac{\partial \varphi_r(x_{k_1}^{(i)})}{\partial x_{i_1}^{(i)}} \varphi_r(x_{k_1}) + \sum_{1 \le k_1 \le k_2 \le n} \sum_{p=1}^{l} \sum_{q=1}^{m} \frac{\partial \left[ \varphi_p(x_{k_1}^{(i)}) \varphi_q(x_{k_2}^{(i)}) \right]}{\partial x_{i_1}^{(i)}} \varphi_p(x_{k_1}) \varphi_q(x_{k_2}) + \cdots \right).

In this paper, the mean value of the response function is estimated by the sample mean, namely, g_0 = \mu_g = \mu_Y = \sum_{i=1}^{N} g(x^{(i)})/N. Thus \mu(x) can be simplified as

\mu(x) = g_0 + \sum_{k_1=1}^{n} \sum_{r=1}^{d} \alpha_r^{(k_1)} \varphi_r(x_{k_1}) + \sum_{1 \le k_1 \le k_2 \le n} \sum_{p=1}^{l} \sum_{q=1}^{m} \beta_{pq}^{(k_1 k_2)} \varphi_p(x_{k_1}) \varphi_q(x_{k_2}) + \cdots,   (19)

where

\alpha_r^{(k_1)} = \sum_{i=1}^{N} \omega_1^{(i)} \varphi_r(x_{k_1}^{(i)}) + \sum_{i=1}^{N} \omega_2^{(i)} \frac{\partial \varphi_r(x_{k_1}^{(i)})}{\partial x_{i_1}^{(i)}},
\beta_{pq}^{(k_1 k_2)} = \sum_{i=1}^{N} \omega_1^{(i)} \varphi_p(x_{k_1}^{(i)}) \varphi_q(x_{k_2}^{(i)}) + \sum_{i=1}^{N} \omega_2^{(i)} \frac{\partial \left[ \varphi_p(x_{k_1}^{(i)}) \varphi_q(x_{k_2}^{(i)}) \right]}{\partial x_{i_1}^{(i)}}.   (20)

From Eq. (14), it follows that the prediction gradient \mu_\partial(x) = \left[ \partial \mu(x)/\partial x_1, \ldots, \partial \mu(x)/\partial x_n \right] can be expressed analogously by differentiating Eq. (19).

Table 2
The comparisons of the surrogate model errors of Function 1.
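Eq. (20) says that, once the weight vectors ω1 (sample part) and ω2 (gradient part), each first-order GE-HDMR coefficient is a plain weighted sum of basis values and basis derivatives at the training points. A sketch under assumed Legendre bases; the function names, the training design `X`, and the single derivative direction `i1` are illustrative choices, not the paper's notation:

```python
import numpy as np
from numpy.polynomial.legendre import legval, legder

def legendre_basis(t, degree):
    """phi_1(t), ..., phi_degree(t) and their derivatives at points t."""
    vals, ders = [], []
    for r in range(1, degree + 1):
        c = np.zeros(r + 1)
        c[r] = 1.0                      # selects Legendre P_r
        vals.append(legval(t, c))
        ders.append(legval(t, legder(c)))
    return np.array(vals), np.array(ders)

def first_order_coeffs(X, omega1, omega2, k1, i1, degree):
    """alpha_r^(k1) = sum_i w1_i phi_r(x_i,k1) + sum_i w2_i dphi_r(x_i,k1)/dx_i1,
    cf. Eq. (20), for a single derivative direction i1."""
    vals, ders = legendre_basis(X[:, k1], degree)   # shape (degree, N)
    # phi_r depends only on x_k1, so its derivative w.r.t. x_i1 vanishes
    # unless i1 == k1.
    der_part = ders if i1 == k1 else np.zeros_like(ders)
    return vals @ omega1 + der_part @ omega2
```

With degree 1 (P1(t) = t, P1'(t) = 1), two training points at t = ±0.5, ω1 = (1, 1) and ω2 = (1, 0), the single coefficient is (0.5 − 0.5) + (1 + 0) = 1.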
Fig. 1. Comparison of errors between the GE-HDMR and HDMR meta-models for functions 1–4.
Fig. 2. Comparison of errors between the GE-HDMR and HDMR meta-models for functions 5–8.
Table 4
The comparisons of the surrogate model errors of Function 3.

Sample size   GE-HDMR RRMSE   GE-HDMR RMAE   HDMR RRMSE   HDMR RMAE
50            0.069           0.590          0.783        4.711
100           0.019           0.212          0.169        1.722
150           0.014           0.116          0.126        1.417
200           0.012           0.070          0.110        0.854
250           0.012           0.066          0.099        0.744

Table 5
The comparisons of the surrogate model errors of Function 4.

Sample size   GE-HDMR RRMSE   GE-HDMR RMAE   HDMR RRMSE   HDMR RMAE
10            0.0144          0.1157         0.4941       2.1779
20            6.19×10^-13     6.85×10^-12    0.1823       1.0847
30            1.34×10^-14     9.09×10^-14    0.0181       0.1157
40            1.11×10^-14     7.99×10^-14    0.0142       0.1085
50            1.02×10^-14     5.17×10^-14    0.0139       0.0899
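The excerpt reports RRMSE and RMAE without restating their formulas. The definitions below are the ones commonly used in the surrogate-modeling literature (both errors normalized by the standard deviation of the true test responses); they are an assumption here, not taken from the paper:

```python
import numpy as np

def rrmse(y_true, y_pred):
    """Relative root mean squared error: overall accuracy of the surrogate,
    assumed here as RMSE normalized by the std of the true responses."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2)) / np.std(y_true)

def rmae(y_true, y_pred):
    """Relative maximum absolute error: worst-case local accuracy,
    assumed here as max |error| normalized by the std of the true responses."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.max(np.abs(y_true - y_pred)) / np.std(y_true)
```

Under these definitions a perfect surrogate scores 0 on both metrics, and RMAE is at least as large as RRMSE on any test set, consistent with the tables above.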
to higher computational cost than the classic HDMR model. However, compared to classic HDMR, the presented method requires far fewer true model evaluations to obtain an accurate surrogate model, which saves considerable computational cost, since a single evaluation of a complex engineering model is usually very expensive.

In addition, we run the presented GE-HDMR model and the classic HDMR model 50 times with the same sample size listed in Table 10 for each test example. In Figs. 3 and 4, box-plots are provided to show the robustness of the two performance metrics for each test example, where 1 and 2 on the x-axis represent the classic HDMR and GE-HDMR models respectively, and RRMSE and RMAE are plotted on a log10 scale. From Figs. 3 and 4, one can conclude that both methods provide relatively robust results for RRMSE, but the variability of RMAE is relatively large. Since RRMSE measures the overall accuracy of the surrogate model,
Fig. 3. Comparison of robustness between the GE-HDMR and HDMR meta-models for functions 1–4.
Fig. 4. Comparison of robustness between the GE-HDMR and HDMR meta-models for functions 5–8.
Table 6
The comparisons of the surrogate model errors of Function 5.

Sample size   GE-HDMR RRMSE   GE-HDMR RMAE   HDMR RRMSE   HDMR RMAE
50            0.096           1.529          1.172        4.342
100           0.022           0.349          0.408        2.330
150           0.021           0.331          0.278        1.846
200           0.014           0.167          0.263        1.487
250           0.009           0.102          0.241        1.405

Table 7
The comparisons of the surrogate model errors of Function 6.

Sample size   GE-HDMR RRMSE   GE-HDMR RMAE   HDMR RRMSE   HDMR RMAE
20            0.00021         0.0105         0.0038       0.043
40            7.31×10^-5      0.0060         0.0035       0.064
60            6.18×10^-5      0.0055         0.00015      0.024
80            4.43×10^-5      0.0054         5.92×10^-5   0.0125
100           4.36×10^-5      0.0044         5.42×10^-5   0.0134

Table 8
The comparisons of the surrogate model errors of Function 7.

Sample size   GE-HDMR RRMSE   GE-HDMR RMAE   HDMR RRMSE   HDMR RMAE
30            0.00149         0.02530        0.1360       1.0700
60            0.00111         0.02300        0.0191       0.1830
90            0.00078         0.01450        0.0137       0.0930
120           0.00064         0.00610        0.0122       0.1050
150           0.00065         0.00520        0.0108       0.0746

Table 9
The comparisons of the surrogate model errors of Function 8.

Sample size   GE-HDMR RRMSE   GE-HDMR RMAE   HDMR RRMSE   HDMR RMAE
20            0.00577         0.08259        0.12745      0.76794
40            0.00329         0.05478        0.10428      0.56946
60            0.00224         0.05280        0.09820      0.50323
80            0.00161         0.05010        0.02589      0.23139
100           0.00069         0.03802        0.01977      0.14373
it is more stable than RMAE, which measures the maximum absolute error. Also, it is observed that the provided GE-HDMR model yields more robust RRMSE results for most of the test examples, which demonstrates the robustness of the presented method.

On the whole, the proposed method in this paper is an efficient and robust way to improve the accuracy of the HDMR meta-model by integrating the gradient information.

Table 10
The comparisons of the computational time of Functions 1–8.

Function ID   GE-HDMR time   HDMR time   Sample size
1             2.21 s         1.72 s      100
2             9.18 s         1.06 s      100
3             35.96 s        1.47 s      250
4             1.87 s         0.96 s      50
5             82.91 s        5.58 s      250
6             2.22 s         1.70 s      100
7             6.84 s         4.17 s      150
8             5.76 s         1.45 s      100

5. Conclusions

In this paper, we investigated the high dimensional model representation surrogate model for the case when gradient information of a response function is present. Assuming the response function and its partial derivative functions are all Gaussian processes, we developed the auto-covariance functions and cross-covariance functions of all these random processes. Then the analytical expression of the GE-HDMR surrogate model is derived from the joint distribution of samples and gradients. The proposed method combines the sample information and the gradient information as a whole to make more accurate predictions. Thus the GE-HDMR surrogate model is promising for function approximation when gradient information can be easily obtained.

Eight benchmark examples are used to validate the performance of the GE-HDMR surrogate model, and the results all suggest that the proposed method is an effective way to integrate gradient information to improve the accuracy of the HDMR meta-model.

Although the presented method improves the accuracy of the classic HDMR model, it usually requires more training time to obtain an accurate surrogate model. Also, the presented method leads to a full HDMR model; however, recent studies have shown that a sparse representation of the HDMR model could improve its performance [43]. Thus future work will concentrate on reducing the computational cost of training the GE-HDMR model as well as developing efficient algorithms to construct a sparse GE-HDMR model.

Declaration of competing interest

No author associated with this paper has disclosed any potential or pertinent conflicts which may be perceived to have impending conflict with this work. For full disclosure statements refer to https://doi.org/10.1016/j.knosys.2019.104903.

Acknowledgments

The authors would like to express their gratitude to the three reviewers for their helpful comments and constructive suggestions. This work was supported by the National Natural Science Foundation of China (Grant No. NSFC 51775439), the National Science and Technology Major Project (2017-IV-0009-0046) and the ''Innovation Foundation for Doctor Dissertation of Northwestern Polytechnical University'' (project code CX201933).

References

[1] H. Liu, Y.-S. Ong, J. Cai, A survey of adaptive sampling for global metamodeling in support of simulation-based complex engineering design, Struct. Multidiscip. Optim. 57 (2018) 393–416.
[2] C. Rasmussen, C. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006.
[3] L.L. Gratiet, Multi-Fidelity Gaussian Process Regression for Computer Experiments, 2013.
[4] L. Parussini, D. Venturi, P. Perdikaris, G.E. Karniadakis, Multi-fidelity Gaussian process regression for prediction of random fields, J. Comput. Phys. 336 (2017) 36–50.
[5] J.P.C. Kleijnen, Regression and Kriging metamodels with their experimental designs in simulation: A review, European J. Oper. Res. 256 (2017) 1–16.
[6] A. Melkumyan, F. Ramos, Multi-kernel Gaussian processes, in: International Joint Conference on Artificial Intelligence, 2009.
[7] Z.-H. Han, S. Görtz, R. Zimmermann, Improving variable-fidelity surrogate modeling via gradient-enhanced kriging and a generalized hybrid bridge function, Aerosp. Sci. Technol. 25 (2013) 177–189.
[8] J.M. Bourinet, F. Deheeger, M. Lemaire, Assessing small failure probabilities by combined subset simulation and support vector machines, Struct. Saf. 33 (2011) 343–353.
[9] X. Zhu, Z. Gao, An efficient gradient-based model selection algorithm for multi-output least-squares support vector regression machines, Pattern Recognit. Lett. 111 (2018) 16–22.
[10] K. Cheng, Z. Lu, Y. Zhou, Y. Shi, Y. Wei, Global sensitivity analysis using support vector regression, Appl. Math. Model. (2017).
[11] P. Tsirikoglou, S. Abraham, F. Contino, C. Lacor, G. Ghorbaniasl, A hyperparameters selection technique for support vector regression models, Appl. Soft Comput. 61 (2017) 139–148.
[12] K. Cheng, Z. Lu, Y. Wei, Y. Shi, Y. Zhou, Mixed kernel function support vector regression for global sensitivity analysis, Mech. Syst. Signal Process. 96 (2017) 201–214.
[13] K. Cheng, Z. Lu, Adaptive sparse polynomial chaos expansions for global sensitivity analysis based on support vector regression, Comput. Struct. 194 (2018) 86–96.
[14] X. Zhou, T. Jiang, An effective way to integrate ε-support vector regression with gradients, Expert Syst. Appl. 99 (2018) 126–140.
[15] T. Jiang, X. Zhou, Gradient/Hessian-enhanced least square support vector regression, Inform. Process. Lett. 134 (2018) 1–8.
[16] Q. Zhou, Y. Wang, P. Jiang, X. Shao, S.-K. Choi, J. Hu, et al., An active learning radial basis function modeling method based on self-organization maps for simulation-based design problems, Knowl.-Based Syst. 131 (2017) 10–27.
[17] R. Schaback, Error estimates and condition numbers for radial basis function interpolation, Adv. Comput. Math. 3 (1995) 251–264.
[18] M. Björkman, K. Holmström, Global optimization of costly nonconvex functions using radial basis functions, Opt. Eng. 1 (4) (2000) 373–397.
[19] G. Blatman, B. Sudret, Adaptive sparse polynomial chaos expansion based on least angle regression, J. Comput. Phys. 230 (2011) 2345–2367.
[20] V. Keshavarzzadeh, R.G. Ghanem, S.F. Masri, O.J. Aldraihem, Convergence acceleration of polynomial chaos solutions via sequence transformation, Comput. Methods Appl. Mech. Engrg. 271 (2014) 167–184.
[21] G. Blatman, B. Sudret, Efficient computation of global sensitivity indices using sparse polynomial chaos expansions, Reliab. Eng. Syst. Saf. 95 (2010) 1216–1229.
[22] S. Salehi, M. Raisee, M.J. Cervantes, A. Nourbakhsh, An efficient multifidelity ℓ1-minimization method for sparse polynomial chaos, Comput. Methods Appl. Mech. Engrg. 334 (2018) 183–207.
[23] B. Sudret, Global sensitivity analysis using polynomial chaos expansions, Reliab. Eng. Syst. Saf. 93 (2008) 964–979.
[24] V. Papadopoulos, D.G. Giovanis, N.D. Lagaros, M. Papadrakakis, Accelerated subset simulation with neural networks for reliability analysis, Comput. Methods Appl. Mech. Engrg. 223–224 (2012) 70–80.
[25] W. Hao, Z. Lu, P. Wei, J. Feng, B. Wang, A new method on ANN for variance based importance measure analysis of correlated input variables, Struct. Saf. 38 (2012) 56–63.
[26] M. Marseguerra, R. Masini, E. Zio, G. Cojazzi, Variance decomposition-based sensitivity analysis via neural networks, Reliab. Eng. Syst. Saf. 79 (2003) 229–238.
[27] Y. Liu, M. Yousuff Hussaini, G. Ökten, Accurate construction of high dimensional model representation with applications to uncertainty quantification, Reliab. Eng. Syst. Saf. 152 (2016) 281–295.
[28] X. Ma, N. Zabaras, An adaptive high-dimensional stochastic model representation technique for the solution of stochastic partial differential equations, J. Comput. Phys. 229 (2010) 3884–3915.
[29] H. Rabitz, Ö.F. Aliş, General foundations of high-dimensional model representations, J. Math. Chem. 25 (1999) 197–233.
[30] E. Li, H. Wang, G. Li, High dimensional model representation (HDMR) coupled intelligent sampling strategy for nonlinear problems, Comput. Phys. Comm. 183 (2012) 1947–1955.
[31] G. Li, C. Rosenthal, H. Rabitz, High dimensional model representations, J. Phys. Chem. A 105 (2001) 7765–7777.
[32] G. Li, S.W. Wang, High dimensional model representations generated from low dimensional data samples, I: mp-Cut-HDMR, J. Math. Chem. 30 (2001) 1–30.
[33] G. Li, J. Hu, S.W. Wang, P.G. Georgopoulos, J. Schoendorf, H. Rabitz, Random sampling-high dimensional model representation (RS-HDMR) and orthogonality of its different order component functions, J. Phys. Chem. A 110 (2006) 2474–2485.
[34] X. Luo, Z. Lu, X. Xu, Reproducing kernel technique for high dimensional model representations (HDMR), Comput. Phys. Comm. 185 (2014) 3099–3108.
[35] I.M. Sobol, Theorems and examples on high dimensional model representation, Reliab. Eng. Syst. Saf. 79 (2003) 187–193.
[36] G. Li, S.W. Wang, H. Rabitz, Practical approaches to construct RS-HDMR component functions, J. Phys. Chem. A 106 (2002) 8721–8733.
[37] H. Rabitz, Ö.F. Aliş, J. Shorter, K. Shim, Efficient input–output model representations, Comput. Phys. Commun. 117 (1999) 11–20.
[38] R.S.C. Lambert, F. Lemke, S.S. Kucherenko, S. Song, N. Shah, Global sensitivity analysis using sparse high dimensional model representations generated by the group method of data handling, Math. Comput. Simulation 128 (2016) 42–54.
[39] L. Laurent, R.L. Riche, B. Soulier, P.A. Boucard, An overview of gradient-enhanced metamodels with applications, Arch. Comput. Methods Eng. (2017) 1–46.
[40] J. Peng, J. Hampton, A. Doostan, On polynomial chaos expansion via gradient-enhanced ℓ1-minimization, J. Comput. Phys. 310 (2016) 440–458.
[41] X. Cai, H. Qiu, L. Gao, P. Yang, X. Shao, An enhanced RBF-HDMR integrated with an adaptive sampling method for approximating high dimensional problems in engineering design, Struct. Multidiscip. Optim. 53 (2016) 1209–1229.
[42] H. Liu, J.-R. Hervas, Y.-S. Ong, J. Cai, Y. Wang, An adaptive RBF-HDMR modeling approach under limited computational budget, Struct. Multidiscip. Optim. 57 (2018) 1233–1250.
[43] R.S.C. Lambert, F. Lemke, S.S. Kucherenko, S. Song, N. Shah, Global sensitivity analysis using sparse high dimensional model representations generated by the group method of data handling, Math. Comput. Simulation 128 (2016) 42–54.