
archives of civil and mechanical engineering 18 (2018) 592–610


Original Research Article

Simplified reliability analysis of multi-hazard risk in gravity dams via machine learning techniques

Mohammad Amin Hariri-Ardebili a,*, Farhad Pourkamali-Anaraki b

a Department of Civil, Environmental and Architectural Engineering, University of Colorado Boulder, 428 UCB, Boulder, CO 80309, USA
b Department of Applied Mathematics, University of Colorado Boulder, 526 UCB, Boulder, CO 80309, USA

article info

Article history:
Received 9 February 2017
Accepted 3 September 2017
Available online

Keywords:
Concrete dams
Classification
Seismic
Temporal degradation
Hydrological hazard

abstract

Deterministic analysis does not provide a comprehensive model for concrete dam response under multi-hazard risk. Thus, the use of a probabilistic approach is usually recommended, which is problematic due to its high computational demand. This paper presents a simplified reliability analysis framework for gravity dams subjected to flooding, earthquakes, and aging. A group of time-variant degradation models is proposed for the different random variables. The response of the dam is represented by explicit limit state functions. The probability of failure is directly computed by either classical Monte Carlo simulation or the refined importance sampling technique. Next, three machine learning techniques (i.e., K-nearest neighbor, support vector machine, and naive Bayes classifier) are adopted for binary classification of the structural results. These methods are then compared in terms of accuracy, applicability, and computational time for prediction of the failure probability. Results are then generalized for different dam classes (based on the height-to-width ratio), various water levels, earthquake intensities, degradation rates, and cross-correlations between the random variables. Finally, a sigmoid-type function is proposed for the analytical calculation of the failure probability for different classes of gravity dams. This function is then specialized for the hydrological hazard, and the failure surface is presented as a direct function of the dam's height and width.

© 2017 Politechnika Wrocławska. Published by Elsevier Sp. z o.o. All rights reserved.

* Corresponding author.
E-mail addresses: mohammad.haririardebili@colorado.edu (M.A. Hariri-Ardebili), farhad.pourkamali@colorado.edu (F. Pourkamali-Anaraki).
http://dx.doi.org/10.1016/j.acme.2017.09.003
1644-9665/© 2017 Politechnika Wrocławska. Published by Elsevier Sp. z o.o. All rights reserved.

1. Introduction

Concrete dams are part of the nation's infrastructure and can be used for generating energy, impounding water, suppressing floods, and for activities such as irrigation, navigation, and industrial use. However, disasters such as human operating errors and natural hazards affect the performance of dams. A dam's potential failure mode (PFM) is a chain of events that leads to unsatisfactory performance, which could result in an uncontrolled water release [1]. There are several well-known PFMs for concrete dams, such as sliding, overturning, overstressing, cracking, and foundation failure [2]. Failures start with some initiating event that causes an adverse change in the structure. The initiators can be classified into the following main groups:

- Hydrological events (e.g., floods and an increase of water flow through the spillway).
- Static events (e.g., reservoir water load, ice load, equipment failure).
- Time-dependent events (e.g., erosion, alkali-aggregate reaction in concrete, increased seepage, clogged drains, degraded grout curtain).
- Seismic events (e.g., earthquake load).
- Other events (e.g., human operating errors, fire, landslides into the reservoir, vehicular impact, sabotage, and vandalism).

There are many concrete dams around the world that are entering middle age [3]. Over one-third of United States dams are already fifty years old, and in another ten years nearly 70% of dams in the United States will have reached the half-century mark [4]. Thus, predicting their long-term behavior and service life, and quantifying their failure probability, is necessary.

Detailed deterministic numerical analysis of concrete dams with all sources of nonlinearities is computationally expensive [5]. Moreover, it does not account for the uncertainties associated with the structure itself (epistemic) and with the applied loads (aleatory). Thus, the primary goal of this paper is to provide a simplified probabilistic framework to determine the failure probability of gravity dams subjected to multi-hazard risks. The secondary goal is to employ several machine learning techniques in order to facilitate classification of the results and estimation of the reliability index. Both tasks are achieved by using explicit limit state functions. Finally, the results are generalized for different gravity dam classes based on the height-to-width ratio.

1.1. Literature review

1.1.1. Reliability analysis of concrete dams
Bury and Kreuzer [6] calculated the sliding failure probability of gravity dams based on rigid body analysis. A Gumbel distributional model was assumed for both the annual peak flood and the ground acceleration. Saouma [7] combined the concept of the reliability index with fracture mechanics and determined the safety index of working dams based on nonlinear finite element analysis. Four random variables were considered: reservoir elevation, fracture toughness, cohesion, and friction angle. Carvajal et al. [8] performed reliability analysis of a roller compacted concrete (RCC) gravity dam by Monte Carlo simulation (MCS) and the first-order reliability method (FORM). Statistical analysis of RCC density, analysis of scatter at different spatial scales, data unification, and a physical formulation of the RCC intrinsic curve were all considered. Sliding and cracking were considered as limit state functions. Also, Peyras et al. [9] proposed a combination of risk analysis, reliability analysis, and event tree methods for the safety analysis of concrete dams. Altarejos-Garcia et al. [10] proposed a methodology to improve the estimation of the conditional probability of responses in gravity dam-reservoir systems by using complex behavior models based on numerical simulation techniques, together with reliability analysis. Yu [11] introduced the first finite element based time-variant reliability assessment of gravity dams. He used stochastic ground motions along with the FORM approximation and Koo's analytical solution to compute the mean up-crossing rate of a given performance function. Krounis [12] studied the sliding stability and failure probability of concrete dams with bonded concrete-rock interfaces. The heterogeneous properties of the interface joint were considered with different spatial correlation lengths.

Nearly all prior studies on the reliability assessment of concrete gravity dams are limited to single-hazard analysis, a constant pool level, and random concrete material capacity (e.g., friction angle and cohesion). The existing simplified models do not take into account seismic or aging hazards. The simulations were performed with crude MCS, and the findings are usually limited to a specific case study (not a generic dam model) subjected to a non-intensifying hazard. Time-dependency of the structural reliability was discussed only at the theoretical level, without any realistic example.

1.1.2. Machine learning for reliability analysis of concrete dams
Machine learning techniques are basically used either for "regression" or "classification" purposes. The former is mainly used to forecast the future response (dam safety monitoring) based on collected past information (dam instrumentation). Different branches of machine learning are also adopted for so-called "back analysis" and the determination of mechanical properties of concrete materials. This line of work considers the prediction of continuous variables (regression) based on "training" data sets. For example, Salazar et al. [13,14] discussed and contrasted some of the machine learning based predictive models for dam safety assessment, i.e., random forests, boosted regression trees, neural networks (NN), support vector machines (SVM), and multivariate adaptive regression splines. The prediction accuracy in each case was compared with a conventional statistical model. Saouma et al. [15] used stepwise linear regression and K-nearest neighbor local polynomial techniques for the prediction of arch dam responses based on pendulum recordings. The results of the statistical analysis were compared with the nonlinear finite element method.

The latter group of machine learning techniques has been used for the classification of dam responses. Research in this field is very limited, and there is a gap between the theoretical aspects and real-world dam engineering applications. Gaspar et al. [16] proposed a probabilistic thermal model to propagate uncertainties in some of the RCC's physical properties. A thermo-chemo-mechanical model was then used to describe the RCC behavior. Moreover, a global sensitivity analysis was performed to evaluate the impact of the random variables. Mata et al. [17] proposed a method based on linear discriminant models for the construction of decision rules for the early detection of developing failure scenarios. They developed a single classification index by combining the physically measured quantities.

Thus, there is an urgent need for comprehensive research on the failure probability estimation of concrete dams using machine learning techniques. For example, by exploring

different hazard models and assessing their correlations, it is possible to obtain a multi-hazard reliability model.

1.2. Contribution

Detailed deterministic numerical analysis of concrete dams with all sources of nonlinearities is computationally expensive [5] and does not account for the uncertainty sources dealing with the structure itself (epistemic) and the applied loads (aleatory). Thus, this paper aims to provide a simplified probabilistic framework in order to determine the failure probability for gravity dams subjected to multi-hazard risk. The major contributions can be itemized as:

- Multi-hazard analysis of gravity dams is performed for the first time in the context of a single paper.
- Aging hazards of gravity dams are studied for the first time, with up to six degrading/increasing models for the random variables.
- A comprehensive study of the temporal reliability of gravity dams is performed, considering both instant and cumulative failure probabilities.
- For all the studied multi-hazards, the failure probability is presented for the intensifying hazard, which eventually forms a so-called fragility function.
- Three machine learning techniques are compared for the estimation of the "structural failure probability" based on two classical reliability analysis techniques (i.e., MCS and importance sampling (IS)).
- The impact of random variable cross-correlation on the failure probability is studied.
- The estimated failure probability is generalized for classes of dams for the first time.
- A sigmoid-type analytical model is proposed for the failure probability.

1.3. Organization of the paper

The structure of the paper is as follows: Section 1.1 provides a review of the existing literature on the topics investigated, and the contributions of the authors are briefly summarized in Section 1.2. This is followed by a review of time-variant and time-invariant structural reliability (Section 2), as well as of the machine learning based classification techniques used in this work (Section 3). Next, the case study is introduced and the random variables are identified (Section 4). Results are provided in Section 5 for three hazard models (pilot, generalized and analytical models). Finally, the concluding remarks are summarized in Section 6.

2. Structural reliability analysis

In classical structural reliability, safety is assessed by developing a so-called "margin of safety" or "limit state function", G. Based on the nature of the investigated problem, a deterministic, time-invariant probabilistic, or time-variant (time-dependent) probabilistic version of the limit state function can be written as:

G = R − S                          (deterministic)
G(X) = R(X) − S(X)                 (time-invariant probabilistic)    (1)
G(X(t)) = R(X(t)) − S(X(t))        (time-dependent probabilistic)

where R is the resistance (i.e., capacity), S is the stress (i.e., applied load or demand), t refers to time, and X ∈ R^N is a random vector of N basic variables X = [X₁, X₂, ..., X_N]. G ≤ 0 corresponds to failure, while G > 0 represents the safe region.

2.1. Time-invariant case

In "time-invariant" structural reliability problems, it is assumed that all the parameters are either deterministic or random variables. They are usually represented by probability distributional models showing the uncertainty over a given time interval. In addition, they can be combined to account for multiple effects on variables [18]. In the time-invariant model, the resulting formulation seeks the probability that the uncertain demand exceeds the uncertain capacity at least once during the selected time period.

The time-invariant failure probability, P_f, can be written as the convolution integral, Fig. 1(a):

P_f = P[R(X) < S(X)] = ∫_{−∞}^{+∞} F_R(d) f_S(d) dd = 1 − ∫_{−∞}^{+∞} F_S(d) f_R(d) dd    (2)

where the randomness of R and S is expressed by the probability density functions (PDFs) f_R and f_S, while F_R and F_S are the corresponding cumulative distribution functions (CDFs).

Fig. 1 – Concept of the reliability analysis with basic random variables.

In the case that several random variables contribute, the overall failure probability of the system is determined as P_f = ∫_{x: G(x) ≤ 0} f_X(x) dx. Since the integration domain is only implicitly available, the direct estimation of P_f is very difficult (and impossible in many cases). Thus, P_f can be estimated by calculating the expectation of a binary (safe/fail) classifier I_f as [19]:

P_f = E[I_f(X)],   I_f(x) = { 1 if G(x) ≤ 0;  0 if G(x) > 0 }    (3)

2.2. Time-variant case

Demands on structures, as well as the structural capacity, are often not constant but change over time. This implies that the probabilistic analysis includes not only random variables but also random functions of time. Fig. 1(b) shows an example of a time-dependent system. At t₀ (construction time), the PDFs of the demand and capacity models are far from each other, with a (codified) safety factor. Structural resistance (capacity) often decreases as a function of time due to deterioration (e.g., alkali-aggregate reaction), while the demand may increase over time (e.g., accumulated debris in the reservoir). The two PDFs first cross (assuming that they are truncated models) at t_i, while the failure probability is still practically zero. Further deterioration of the system or increase of the demand leads to overlapping PDFs (the overlapped area is interpreted as P_f). There are various methods to incorporate the temporal effects of both the demand and the capacity in the reliability assessment (e.g., time-integrated and discrete approaches) [21]. In the present paper, the time-variant failure probability is written as:

P_f(t) = P[R(X(t)) < S(X(t))] = ∫_0^{+∞} F_{R,t}(d) f_{S,t}(d) dd    (4)

where F_{R,t} and f_{S,t} are the instantaneous CDF of R and the instantaneous PDF of S at time t, respectively, assuming that R and S are statistically independent [22].

Next, the time interval (t₀, t_f) (where t₀ = 0 is the initial or construction time and t_f refers to the final or expected lifetime of the structure) can be divided into n non-overlapping time instants, t₁, t₂, ..., t_n (t_n = t_f), and the probability of failure can be reported in two ways:

- Instant probability of failure, P_f^I, where the failure probability is calculated based on the instant status of both R and S, Eq. (2). This model only considers the system behavior at time t = t_i, and there is no condition on the previous failures.
- Cumulative probability of failure, P_f^C, in which the failure probability is calculated within the time interval (0, t] as:

P_f^C(0, t] = 1 − P[(R(X(t₁)) > S(X(t₁))) ∩ ⋯ ∩ (R(X(t_k)) > S(X(t_k)))]
            = 1 − ∏_{j=1}^{k} (1 − P_f^I(t_j))    (5)

In this model, the failure probability at any time instant is conditioned on all the previous times. The final simplified formula is, in fact, a generalized equation for a system with series components.

Time-dependency of the random variables is simply modeled using the initial state and a temporal degradation function ψ(t). In the most general form, they can be written as:

R(t) = R₀ ψ_R(t),   R₀ = R|_{t=0}
S(t) = S₀ ψ_S(t),   S₀ = S|_{t=0}    (6)

Detailed time-dependent models for cracking, load increase, and material degradation will be discussed in Section 4.2.

2.3. Simulation based techniques

Estimation of P_f is the challenging part of reliability analysis [20]. There are many methods to achieve this goal depending on the problem type, the required accuracy, and the computational time, e.g., FORM, the second-order reliability method (SORM), crude MCS, MCS with importance sampling (IS) [23], MCS with Latin hypercube sampling (LHS) [24], and subset simulation (SS) [25]. In this paper, two simulation based techniques are used and briefly discussed below. Crude MCS is used as it is the reference method in structural reliability, while IS is used specifically for "rare events".

Rare events refer to those with a low frequency of occurrence and encompass both natural hazards (e.g., earthquakes, tsunamis, floods) and anthropogenic hazards (e.g., industrial accidents), as well as their interactions. Rare events have a "small failure probability", usually on the order of 10⁻⁵ to 10⁻². Thus, the standard random generation techniques cannot be easily applied, as they require a large number of simulations.

2.3.1. Monte Carlo simulation
In this method, the failure probability is directly calculated based on the joint PDF of all the random variables. An unbiased estimator of P_f is given by:

P̂_f^MCS = (1/N_sim) Σ_{j=1}^{N_sim} I_f(x_j) = N_fail / N_sim    (7)

where N_sim is the total number of simulations, N_fail is the number of failed simulations, and the hat denotes an estimate.

The confidence intervals for P̂_f^MCS are:

P̂_f^MCS ∈ P̂_f^MCS [ 1 ∓ Φ⁻¹(1 − α/2) ( (1 − P̂_f^MCS) / (N_sim P̂_f^MCS) )^{1/2} ]    (8)
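The estimator of Eq. (7) and the confidence interval of Eq. (8) can be exercised on a small example. The sketch below is not from the paper: it assumes a hypothetical limit state G = R − S with normal capacity R and demand S, and uses Φ⁻¹(1 − α/2) ≈ 1.96 for a 95% confidence level.

```python
import math
import random

random.seed(2017)

# Hypothetical capacity and demand models (not the paper's random variables):
# R ~ N(10, 1.5), S ~ N(6, 1.0), so the toy limit state is G = R - S
# and failure corresponds to G <= 0.
n_sim = 100_000
n_fail = sum(
    1
    for _ in range(n_sim)
    if random.gauss(10.0, 1.5) - random.gauss(6.0, 1.0) <= 0.0
)
pf_hat = n_fail / n_sim  # Eq. (7): estimated Pf = N_fail / N_sim

# Eq. (8): relative half width of the confidence interval at level 1 - alpha,
# with Phi^{-1}(1 - alpha/2) ~ 1.96 for alpha = 0.05.
z = 1.96
half_width = z * math.sqrt((1.0 - pf_hat) / (n_sim * pf_hat))
ci_low = pf_hat * (1.0 - half_width)
ci_high = pf_hat * (1.0 + half_width)

# Closed-form check for this toy case: G ~ N(4, sqrt(1.5^2 + 1.0^2)),
# so Pf = Phi(-4 / sigma_G), evaluated via the complementary error function.
pf_exact = 0.5 * math.erfc((4.0 / math.sqrt(1.5**2 + 1.0**2)) / math.sqrt(2.0))
```

Because the half width in Eq. (8) scales as (N_sim · P̂_f)^{-1/2}, the number of simulations must grow as the failure probability shrinks; this is what motivates the use of importance sampling (Section 2.3.2) for rare events.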

where Φ(·) is the standard normal CDF and α ∈ [0, 1] is used to calculate the bounds with a confidence level of 1 − α.

2.3.2. Importance sampling
Importance sampling is a technique to reduce the large number of simulations (and consequently the variance of the responses) in crude MCS, specifically for rare events (e.g., small failure probabilities). In this method, which was originally proposed by Harbitz [23], the idea is to concentrate the sampling distribution in the most important region [26]. One way this can be done is by moving the sampling center from the origin in the standard normal space to the "design point" on the limit state function [27]. The design point is the closest point from the origin to the limit state function (usually estimated by FORM). Consequently, a new sampling PDF, h_X(x), is defined to obtain the samples in the desired region, and the failure probability can be approximated, in analogy to Eq. (7), as:

P̂_f^IS = (1/N_sim) Σ_{j=1}^{N_sim} I_f(x_j) f_X(x_j) / h_X(x_j)    (9)

Clearly, the appropriate choice of h_X(x) facilitates the implementation of this method. Melchers [20,28] and Au and Beck [29] proposed different techniques for using importance sampling in structural reliability.

3. Classification techniques

In machine learning, classification is the problem of assigning new observations to one of a finite number of discrete categories. Therefore, the goal of classification is to learn a model that makes accurate predictions on new observations based on a set of training data points. To be formal, let x₁, ..., x_n be a set of training observations in R^p with corresponding categories y₁, ..., y_n, where y_i ∈ {−1, +1}. Here, p denotes the number of features (random variables), and each category y_i is a binary variable such that y_i = −1 corresponds to class C₁ (failure region) and y_i = +1 corresponds to class C₂ (safe region). Using the classification rule learned in the training phase, a new observation x ∈ R^p is assigned to one of these two categories.

In practice, the set of n training observations is used to form the training matrix X_trn = [x₁, ..., x_n]^T ∈ R^{n×p}, where each row corresponds to one observation. Prior to the training phase, observations are often preprocessed, since features or random variables might be measured in different units [30]. There are two main techniques to preprocess the training data: (1) rescaling and (2) standardization. In the first method, also known as Min-Max scaling, features or random variables are rescaled to the range [0, 1] by computing the maximum and minimum values. The second method instead computes the sample mean and standard deviation of the random variables. Then, the features in each column of X_trn are standardized by subtracting the mean and dividing by the standard deviation. As a result, the random variables are rescaled so that they have the properties of a standard normal distribution with zero mean and unit standard deviation. The second method is employed in the present work since it has been shown to be crucial in clustering analyses [31].

In this paper, three popular classification algorithms are studied and contrasted: (1) K-nearest neighbor (KNN), (2) support vector machine (SVM), and (3) naive Bayes classifier (NBC), Fig. 2. These algorithms cover both deterministic and probabilistic classification approaches, and they are among the top ten data mining algorithms [32]. The training procedure, properties, and computational complexity of these three techniques are explained in Sections 3.1, 3.2 and 3.3.

Fig. 2 – Three classification techniques in machine learning. From left to right: K-nearest neighbor (KNN), support vector machine (SVM), and naive Bayes classifier (NBC). In KNN, a new observation is classified based on the labels of the K nearest neighbors. SVM solves an optimization problem to find a separating hyperplane with maximum margin, and a new observation is classified based on its position. NBC is a probabilistic approach to find the most likely class for a new observation using Bayes' theorem.

3.1. K-nearest neighbor

The K-nearest neighbor algorithm is one of the simplest classification techniques in machine learning. Given a new observation x ∈ R^p, the K training observations from the rows of X_trn closest in distance to x are found. Then, x is classified using the majority vote among these K nearest observations from the training set. Therefore, the KNN algorithm is sensitive to the local structure of the data, and its performance depends heavily on the choice of K.

To reduce the sensitivity of KNN, one possible approach is to weight the contribution of each of the K nearest neighbors according to their distance to the observation x. For example, the class of each of the K nearest training observations is multiplied by a weight which is proportional to the inverse of its distance from x. Thus, greater weight is given to closer neighbors. The classification rule of weighted KNN for a new observation x and its K nearest neighbors x₁, ..., x_K with labels y_i, i = 1, ..., K, can be written as:

f(x) = sign( Σ_{i=1}^{K} w_i y_i / Σ_{i=1}^{K} w_i ),   w_i = 1 / ‖x − x_i‖    (10)

where w_i is the distance weighting function.

3.2. Support vector machine

The support vector machine is one of the most popular classification algorithms. The determination of the model parameters can be written as a simple optimization problem by using the Lagrange multipliers method. Therefore, one can use standard off-the-shelf optimization software to find the required parameters.

The key idea of SVM is to map the training observations from R^p into a high (maybe infinite) dimensional feature space so that the mapped data are linearly separable. Therefore, there exists at least one separating hyperplane of the form (because of the linear separability assumption [33]):

f(x) = w^T φ(x) + b    (11)

where the operator φ : R^p → R^H represents the transformation, w ∈ R^H is the coefficient vector that determines the orientation of the hyperplane, and b ∈ R is the bias term. Among all possible hyperplanes, the one with the largest margin is selected, where the margin is defined as the smallest distance between the hyperplane and any of the training observations. After determination of the model parameters w and b, a new test data point x is classified according to the sign of f(x) (recall that the corresponding target for each observation is either −1 or +1).

Now, we explain the problem formulation and optimization to find the separating hyperplane in SVM. Since the mapped data are linearly separable, all the training observations x₁, ..., x_n satisfy the following constraints:

w^T φ(x_i) + b ≥ +1 for y_i = +1
w^T φ(x_i) + b ≤ −1 for y_i = −1    (12)

and these two constraints can be combined into one set of inequalities:

y_i (w^T φ(x_i) + b) − 1 ≥ 0 for all i = 1, ..., n    (13)

Under this setup, the points that are closest to the separating hyperplane, known as "support vectors", lie on the following hyperplane:

y_i (w^T φ(x_i) + b) = 1    (14)

This hyperplane and the separating hyperplane in Eq. (11) are parallel (they have the same orientation or coefficient vector w), and the distance between them is 1/‖w‖, where ‖w‖ is the Euclidean norm of w. Thus, the hyperplane which gives the maximum margin is obtained by maximizing ‖w‖⁻¹, which is equivalent to minimizing ‖w‖², subject to the constraints given in Eq. (13).

To solve the above constrained optimization problem, the Lagrange multipliers method is often used, which results in maximizing the following term with respect to α [34]:

L̃(α) = Σ_{i=1}^{n} α_i − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} α_i α_j y_i y_j k(x_i, x_j)    (15)

subject to the constraints:

α_i ≥ 0, i = 1, ..., n;   Σ_{i=1}^{n} α_i y_i = 0    (16)

Here, the "kernel function" is defined by k(x_i, x_j) = φ(x_i)^T φ(x_j), which represents the dot product between the two mapped data points φ(x_i) and φ(x_j). Given the optimal solution α* = [α₁*, ..., α_n*]^T from the training procedure, a new data point x_test is classified according to the sign of f(x_test):

f(x_test) = Σ_{i=1}^{n} α_i* y_i k(x_test, x_i) + b*    (17)

where b* can be found by substituting the optimal value w* = Σ_{i=1}^{n} α_i* y_i φ(x_i) in Eq. (14) and can be expressed in terms of the kernel function.

Note that the separating hyperplane is optimized without ever having to explicitly compute the coordinates of the data points φ(x_i) in feature space. Thus, employing kernel functions is an efficient approach to perform nonlinear classification by "implicitly" mapping the training observations into feature space. The two commonly used families of kernels are polynomial kernels and radial basis functions (RBF) [35,36]. The polynomial kernel function is of the form k(x_i, x_j) = (x_i^T x_j + 1)^d with parameter d ∈ N. On the other hand, the RBF (also known as the Gaussian kernel) takes the form k(x_i, x_j) = exp(−γ ‖x_i − x_j‖²) with parameter γ ∈ R⁺.

3.3. Naive Bayes classifier

The naive Bayes classifier is a probabilistic classification algorithm based on Bayes' theorem. In the training phase, this method estimates the parameters of a probability distribution, which is often assumed to be the normal distribution for continuous variables. Then, a new observation x is classified based on the probability that x belongs to each class. To explain this classification technique, Bayes' theorem is adopted.
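The training and classification steps just outlined can be sketched in a few lines of Python. Everything below is illustrative and not from the paper: the two-feature observations and labels (y = −1 for the failure region, y = +1 for the safe region, following the convention of Section 3) are hypothetical, and the per-feature likelihoods are taken as normal densities in line with the assumption stated above.

```python
import math
from collections import defaultdict

def norm_pdf(x, mu, sigma):
    """Normal density used for the per-feature likelihoods P(x_i | C_j)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def train_gnb(X_trn, y_trn):
    """Training phase: class priors P(C_j) plus a (mean, std) pair per feature."""
    by_class = defaultdict(list)
    for x, y in zip(X_trn, y_trn):
        by_class[y].append(x)
    model = {}
    for c, rows in by_class.items():
        prior = len(rows) / len(X_trn)
        stats = []
        for col in zip(*rows):  # one column per feature
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / (len(col) - 1)
            stats.append((mu, math.sqrt(var)))
        model[c] = (prior, stats)
    return model

def classify_gnb(model, x):
    """Pick the class maximizing P(C_j) * prod_i P(x_i | C_j) (naive independence)."""
    best_class, best_score = None, -1.0
    for c, (prior, stats) in model.items():
        score = prior
        for xi, (mu, sigma) in zip(x, stats):
            score *= norm_pdf(xi, mu, sigma)
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Hypothetical 2-feature training set: three "failure" and three "safe" points.
X_trn = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (3.0, 3.0), (3.1, 2.9), (2.8, 3.2)]
y_trn = [-1, -1, -1, 1, 1, 1]
model = train_gnb(X_trn, y_trn)
pred = classify_gnb(model, (1.1, 0.9))  # lies near the failure-region cluster
```

The argmax over prior times product-of-likelihoods mirrors the posterior maximization that Eqs. (18)-(21) formalize below; replacing norm_pdf with another density changes the likelihood model without altering the structure.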

Given a new observation x, the main idea behind the naive Bayes classifier is to find the following probability, known as the posterior probability, for each of the two classes:

P(C_j | x),   j = 1, 2    (18)

where P(C_j | x) is the probability of class C_j given the observation x.

Then, x is assigned to the class that has the maximum posterior probability:

Ĉ = argmax_{j ∈ {1,2}} P(C_j | x)    (19)

To find the posterior probability in Eq. (18), Bayes' theorem is used to get the following reformulation:

P(C_j | x) = P(x | C_j) P(C_j) / P(x)    (20)

In this equation, the denominator does not depend on the class C_j, which means that the denominator is effectively constant. Therefore, the main difficulty in finding the posterior probability is to calculate the term P(x | C_j) in the numerator. To solve this problem, a "naive" assumption is considered: the features of the new observation x ∈ R^p are conditionally independent given the class. Thus, the classification rule for the new observation vector x = [x₁, ..., x_p]^T is given as:

Ĉ = argmax_{j ∈ {1,2}} P(C_j) ∏_{i=1}^{p} P(x_i | C_j)    (21)

4. Studied cases

Three main examples are studied in this paper, corresponding to three natural hazard risks, i.e., hydrological, aging, and seismic. All examples follow the classical reliability analysis of gravity dams based on the limit equilibrium method (LEM), in which the dam is assumed to be a rigid body and sliding is only allowed along the critical surfaces (i.e., the concrete-rock interface and concrete lift joints). Thus, only the magnitude and the moment arm of the resultant loads are important. No internal stress is evaluated in this method. It is noteworthy that the whole framework in this paper can be applied in conjunction with more complex finite element techniques; however, the overall run time would increase enormously. Such methods allow the internal stresses to be evaluated.

Readers should note that for a detailed safety assessment of concrete dams, especially under the collapse scenario, employing nonlinear time history analysis is recommended for final design purposes. This is important because there are many construction and design limitations, as well as material heterogeneity and loading variations, which are not considered in the initial simplified analysis and design stage. All these conditions may (or may not) lead to a non-uniform distribution of demand and capacity and facilitate (or accelerate) the failure process.

The first author has performed transient analyses accounting for soil-fluid-structure interaction, material and geometric nonlinearities [5], and performance based earthquake engineering research [2] for concrete dams. However, LEM is used in this paper in conjunction with explicit limit state functions to facilitate the machine learning based classifications. This method is followed by many regulators/countries and is based on experience and engineering judgment [37]. Moreover, it is the basis of several recent works, e.g., Carvajal et al. [8], Westberg [38], Altarejos-Garcia et al. [10], Westberg Wilde and Johansson [39], Huaizhi et al. [40] and, more recently, Morales-Torres et al. [41]. In the most general form, the limit state function is:

Z = f(T, W, U, φ, c, A, L_cr, t)    (22)

where T is the shear force, W is the weight, U is the uplift force, φ and c are the angle of friction and the cohesion at the considered plane, respectively, A is the area of rupture (for unit thickness), L_cr is the pre-existing crack length, and t is the time, to be used in time-variant problems.

In the context of LEM, two main limit state functions can be developed: (1) Z₁(t), the sliding limit state (at the dam-foundation interface or along any lift joint), and (2) Z₂(t), the overturning limit state (around the dam's toe):

Z₁(t) = (W(t) − U(t)) tan φ + c(t) (A − A_cr(t)) − T(t)
Z₂(t) = Σ_{i=1}^{n} (F_R(t) d_R(t))_i − Σ_{j=1}^{m} (F_S(t) d_S(t))_j    (23)

where A_cr = L_cr × 1 is the pre-existing cracked area with unit thickness, F_R(t) and F_S(t) are the time-variant resisting and driving forces, respectively, and d_R(t) and d_S(t) are the corresponding moment arms around the dam's toe [39]. The parameters n and m are the numbers of resisting and driving force components (resulting from different loads and segments of the dam body). Note that these limit state functions can be used for all hazard models; the time parameter is only active for the "temporal" hazard. The magnitudes of the resisting and driving loads depend on the random variables and the hazard intensity.

The model parameters are listed in Table 1. Depending on the analysis type, the natural hazard, and the potential risk, these parameters may take deterministic or probabilistic values. Considering the three types of hazards investigated in this paper, the load combinations and the effective parameters are discussed separately. Note that for most of the parameters the appropriate distributional model is selected based on the current literature. For the new/unknown models one may use either a normal or a uniform distribution. The former is appropriate when there is some information about the central values (e.g., the mean), while the latter is used in the case that the information is limited to the upper and lower bounds. In fact, the normal and uniform distributions are identical in one critical information-theoretic sense: they both have maximum entropy. The uniform distribution has maximum entropy over a compact interval, and the normal distribution has maximum entropy over the real numbers with a specified variance, i.e., a particular moment.

Fig. 3 shows all the applied loads on the dam as well as its dimensions. Magnitude and distribution of the uplift pressure
archives of civil and mechanical engineering 18 (2018) 592–610 599

Table 1 – Model parameters for analytical limit state function [8,39].

Parameter                       Symbol   Unit    Distributional model
Width at the base               B1       m       Parametric (dam classes)
Width at the crest              B2       m       0.08 × B1
Height of the dam               H1       m       Parametric (dam classes)
Height of the neck              H2       m       0.06 × H1
Location of drainage            Ld       m       0.15 × B1
Crack at the base               Lcr      m       Uniform (depends on B1)
Height of the water             Hw (a)   m       Lognormal (depends on H1)
Height of the silt layer        Hs       m       Normal (depends on H1)
Concrete mass density           ρc       kg/m³   Normal
Water mass density              ρw       kg/m³   Deterministic
Silt mass density               ρs       kg/m³   Deterministic
Rock-concrete cohesion          crc      MPa     Lognormal
Rock-concrete friction angle    φrc      deg.    Normal
Drain efficiency                effD     –       Uniform (0, 1)
Silt internal friction angle    φs       deg.    Deterministic
Aging time                      tage     year    t0 : tf
Seismic coefficient             agm      –       0 : amax
Earthquake vibration period     te       s       Uniform

(a) Height of the water, Hw, is equal to Hf (flooding height) when it exceeds H1.
Fig. 3 – Model description; loads and geometry.
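As a concrete illustration, the distributional models of Table 1 can be sampled with NumPy. The sketch below uses the pilot-model values quoted later in Section 5.1.1 (B1 = 70 m, H1 = 100 m); the conversion of the lognormal mean/standard deviation to the underlying normal parameters is an implementation detail the paper does not spell out, so treat it as one reasonable assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                       # number of samples per random variable

def lognormal(mean, std, size):
    """Draw lognormal samples specified by their own mean/std (converted to
    the parameters of the underlying normal distribution)."""
    sigma2 = np.log(1.0 + (std / mean) ** 2)
    mu = np.log(mean) - 0.5 * sigma2
    return rng.lognormal(mu, np.sqrt(sigma2), size)

B1, H1 = 70.0, 100.0              # assumed pilot-model geometry
samples = {
    "Lcr":    rng.uniform(0.0, 0.4 * B1, N),   # crack length, U(0, 28) m
    "Hw":     lognormal(85.0, 20.0, N),        # headwater, LN(85, 20) m
    "Hs":     rng.normal(10.0, 6.0, N),        # silt height, N(10, 6) m
    "c_rc":   lognormal(0.6, 0.15, N),         # cohesion, LN(0.6, 0.15) MPa
    "phi_rc": rng.normal(30.0, 7.0, N),        # friction angle, N(30, 7) deg
    "effD":   rng.uniform(0.0, 0.99, N),       # drain efficiency, U(0, 0.99)
}
# Truncate the normal models to physically meaningful ranges, as the paper
# notes (e.g., no negative silt height):
samples["Hs"] = np.clip(samples["Hs"], 0.0, None)
```

Each entry of `samples` is then one column of the random-variable matrix used by the limit state functions.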

is changed based on the location of the drainage and the crack length. Moreover, the classes of gravity dams are shown in this figure, where the B1/H1 ratio varies as 0.5:0.1:1.0. The one with B1/H1 = 0.7 is considered the standard shape.

4.1. Hydrological hazard

In this example, it is assumed that the gravity dam is subjected to flooding load. The applied loads are: (1) self-weight, (2) uplift pressure, (3) hydrostatic pressure (based on the flood level), and (4) silt pressure. Note that, based on the authors' preliminary research, the ice pressure, wind load, and surface wave load do not have any dominant impact on the failure probability and thus are ignored in this research (this is indeed verified by performing FORM-based sensitivity analysis on all the initial random variables). Waves due to a sudden landslide into the dam's reservoir may (or may not) also cause a large driving force. The magnitude of such a force depends on the topology of the valley. In any case, its hazard can be quantified similarly to the flooding condition.

The stress, S, in Eq. (1) is applied by the hydrostatic, uplift, and silt pressures, while the resistance, R, is provided by the dam-foundation interface. Only one limit state function (sliding at the dam-foundation interface, which is the predominant failure mode) is considered, to facilitate comparison of different models. In situations where other failure modes are taken into account (e.g., overturning), the final failure probability is composed of all limit state functions in series and/or parallel modes [40].
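The sliding limit state Z1 of Eq. (23) combined with crude Monte Carlo simulation can be sketched as follows. The geometry and load expressions below (a roughly triangular cross-section, a triangular uplift diagram reduced by the drains, no silt pressure, and normal models in place of the paper's lognormals) are deliberately simplified illustrations, not the paper's exact formulation, so the printed Pf is only indicative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000                               # crude MCS sample size
g, rho_w, rho_c = 9.81, 1000.0, 2400.0      # SI units; densities as in Table 1
B1, H1 = 70.0, 100.0                        # standard dam class (B1/H1 = 0.7)

# Random variables (an illustrative subset of Table 1).
Hw   = np.clip(rng.normal(85.0, 20.0, N), 0.0, None)  # headwater [m]
phi  = np.radians(rng.normal(30.0, 7.0, N))           # friction angle [rad]
c    = np.clip(rng.normal(0.6, 0.15, N), 0.0, None)   # cohesion [MPa]
Lcr  = rng.uniform(0.0, 0.4 * B1, N)                  # cracked base length [m]
effD = rng.uniform(0.0, 0.99, N)                      # drain efficiency [-]

# Per-unit-width forces in MN (simplified load model for illustration only).
W = rho_c * g * 0.5 * B1 * H1 / 1e6                   # self-weight
U = (1.0 - effD) * 0.5 * rho_w * g * Hw * B1 / 1e6    # uplift
T = 0.5 * rho_w * g * Hw**2 / 1e6                     # hydrostatic thrust

# Sliding limit state of Eq. (23): failure when Z1 < 0.
Z1 = (W - U) * np.tan(phi) + c * (B1 - Lcr) * 1.0 - T  # MPa * m^2 = MN
Pf = np.mean(Z1 < 0.0)
print(f"crude-MCS estimate of Pf: {Pf:.4f}")
```

Replacing the indicator average by an importance-sampling weighted average gives the MCS-IS variant discussed later in Section 5.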

4.2. Aging hazard

As discussed in Section 2.2, time-variant reliability is considered by simply using a temporal degrading or increasing function, c(t) (see Eq. (6)), for the random variables. The applied loads on the system are identical to the hydrological hazard, except that the reservoir operates at the normal water level. The time-dependent uplift pressure is automatically adjusted as a function of the reservoir water level, crack length, and drain efficiency, Fig. 3. In this paper, six random variables are assumed to change with time, Fig. 4. The following describes the quantification of these time-dependent empirical models:

Crack length is assumed to increase linearly with time, and no repair is expected within the life span of the dam. At any given time interval, the crack length follows a uniform distribution bounded between the minimum and maximum values, Fig. 4(a):

  Lcr*(t) = 0                               for t ≤ ts
  Lcr*(t) = [(t − ts)/(tf − ts)] Lcr*       for t > ts        (24)

where ts is the cracking start time, tf is the final (life) time, and * refers to either the upper (Lcr^U) or lower (Lcr^L) bound of the crack length. In this paper, Lcr^L and Lcr^U are 5% and 40% of the total base length (B1), respectively.

Water height at the upstream face (headwater) varies based on the following regime [40]:

  Hw^m(t) = Hw0^m + 3.5 (1 − 1/t^(1/4)) Hw0^s
  Hw^s(t) = (1/t^(1/4)) Hw0^s                                 (25)

where m and s refer to the mean and standard deviation, respectively, and the subscript zero denotes the initial value (first operation). In this paper, Hw0^m and Hw0^s are assumed to be 90% and 1.5% of the dam height (H1), respectively. Moreover, the lower and upper bounds are 85% and 99.9% of H1. Sampling at any time interval is based on a normal distributional model, Fig. 4(b).

Silt height increases with time as sediment accumulates behind the dam. The following empirical model is proposed to account for the time-dependent alluvium height:

  cHs(t) = K (t − ts)^b                                       (26)

where K and b are constants optimized based on site observations; in this paper they are assumed to be 0.0085 and 0.5, respectively. Moreover, ts refers to the delay in the silt accumulation time. It corresponds to the time interval in which the silt still fully passes through the gates and has not yet started to stack in the dam's reservoir. Consequently, the mean, standard deviation, and the bounds are computed as:

  Hs*(t) = 0 for t ≤ ts, and for t > ts:
    m: H1 cHs(t);   s: COV × H1 cHs(t);   U: UB × H1 cHs(t);   L: LB × H1 cHs(t)    (27)

where the superscript * refers to m (mean), s (standard deviation), U (upper bound), and L (lower bound). In this paper, the coefficient of variation is COV = 0.6, while LB and UB are assumed to be 0.1 and 3.0, respectively. A normal distributional model is assumed for Hs, as shown in Fig. 4(c).

Cohesion at the rock-concrete interface is assumed to deteriorate based on the normalized stochastic model proposed by Li et al. [22]:

  cc(tn) = 1 − Σ_{i=1}^{n} d̂(ti) e(ti)
  d̂(ti) = k ti^m Δt                                          (28)
  α(t) = d̂(t)/ξ,   β(t) = ξ/d̂(t)

Fig. 4 – Empirical time-dependent models for the random variables; in (a) and (f) the black lines mark the bounds of the uniform distribution and the green bars show the histogram; in (b), (c), (d) and (e) the black solid line shows the mean, the dashed black lines are the mean ± standard deviation, and the dashed blue lines are the lower/upper bounds.

where d̂(ti) is the time-dependent mean degradation occurring between ti−1 and ti (describing the shape of the degradation model), Δt is the time separation between ti−1 and ti (usually taken as one year), and k and m are the scale factor and the shape factor, respectively. The parameter e(ti) is a sequence of independent random variables described by a Gamma distribution with time-dependent shape factor α(t) and scale factor β(t). Finally, ξ is a constant parameter defining the variation associated with cc(tn).

The degradation function can be simplified as a Gamma distributional model with the following mean and standard deviation [22]:

  crc^m(t) = 1 − [k/(m+1)] t^(m+1)
  crc^s(t) = √( [k ξ/(m+1)] t^(m+1) )                         (29)

In this paper, k, m, and ξ are assumed to be 0.00007, 0.8, and 0.4, respectively. The initial mean and standard deviation are crc0^m = 0.65 MPa and crc0^s = 0.2 MPa, respectively. Finally, constant lower and upper bounds equal to 0.5 and 1.05 times crc0^m are considered for sampling purposes, Fig. 4(d).

Friction angle is assumed to deteriorate based on a survivor function proposed by Yang et al. [42]:

  cφ(t) = exp(−λφ (t − ts)^γ)                                 (30)

where λφ is the failure rate, γ is the shape factor, and ts is the delay in the deterioration starting time. Consequently, the mean, standard deviation, and the bounds are computed as:

  for t ≤ ts:  m: φrc0^m;        s: φrc0^s;             U: φrc0^m + UB;    L: φrc0^m − LB
  for t > ts:  m: φrc0^m cφ(t);  s: φrc0^s cφ^(−1)(t);  U: φrc^m(t) + UB;  L: φrc^m(t) − LB    (31)

where the superscript * refers to m (mean), s (standard deviation), U (upper bound), and L (lower bound). In this paper, φrc0^m = 30, φrc0^s = 5, and LB = UB = 15. Moreover, λφ and γ are assumed to be 0.005 and 1, respectively. A normal distributional model is assumed for φrc, as shown in Fig. 4(e).

Drain efficiency is assumed to reduce with time due to clogged drains, and no remediation action on the pipes is expected. At any given time interval, the drain efficiency follows a uniform distribution bounded between the minimum and maximum values, Fig. 4(f):

  effD*(t) = effDs*                                             for t ≤ ts
  effD*(t) = effDs* + [(t − ts)/(tc − ts)] (effDc* − effDs*)    for ts < t ≤ tc
  effD*(t) = effDc*                                             for t > tc        (32)

where ts is the start of clogging, tc is the clogging completion time, and * refers to either the upper (effD^U) or lower (effD^L) bound of the drain efficiency. In this paper, effDs^U, effDs^L, effDc^U, and effDc^L are assumed to be 100%, 90%, 40%, and 0%, respectively. Moreover, ts and tc are set to 20 and 120 years.

4.3. Seismic hazard

In this example, it is assumed that the gravity dam is subjected to earthquake loading. There are various methods to model the loads/stresses due to an earthquake event on a dam. Based on Hariri-Ardebili [2], one may categorize these methods as: (1) pseudo-static analysis, (2) pseudo-dynamic analysis, (3) linear time history analysis, (4) nonlinear time history analysis, (5) narrow-range nonlinear analyses, and (6) wide-range nonlinear analyses. Many aspects influence the selection and application of an appropriate method [43]; however, in this paper, pseudo-static analysis (also known as the seismic coefficient method) is used. The main reason can be attributed to the simplicity of this method and its straightforward combination with explicit limit state functions, which provides a fully analytical model for the reliability assessment.

In the seismic coefficient method (which was originally developed for design, not analysis), the earthquake loading is treated as an inertial force applied statically to the structure [44]. Two additional loads are applied to the system (compared to the hydrological hazard): (1) the inertia force due to the horizontal acceleration of the dam, and (2) the hydrodynamic force. The former is computed as the mass times the earthquake acceleration and acts through the center of mass. The seismic coefficient, agm, is defined as the ratio of the horizontal ground acceleration to gravity (and in no case can it be related directly to the acceleration from a strong motion instrument) [44].

On the other hand, the hydrodynamic pressure, Phyd, and consequently the force, Fhyd, on the upstream face of the dam may be computed by means of the Westergaard [45] parabolic approximation:

  Phyd(y) = Ce agm Ku √(Hw) y^(1/2)
  Fhyd(y) = (2/3) Ce agm Ku √(Hw) y^(3/2)                     (33)

where y is the water depth measured from the free surface down to the dam's base, Ku is a correction factor to account for the upstream slope (unity for dams with a vertical face), and Ce is a correction factor to account for water compressibility, presented in SI units ([kN], [m], [s]) as:

  Ce = 7.99 Cc,   Cc = 1 / √( 1 − 7.75 (Hw/(1000 te))^2 )     (34)

where te is the period that characterizes the ground acceleration imposed on the dam.

Fig. 5(a) shows the sensitivity of the hydrodynamic pressure to different te values ranging over 0.50:0.25:1.5 s. Three dams are considered with full reservoirs, where the heights are 50, 100, and 150 m. Also, the seismic coefficient is assumed to be 0.1 in all cases (it does not affect the relative results). As seen, there is no difference among the hydrodynamic pressure parabolic curves for Hw = 50 m. The variation is negligible for the case with Hw = 100 m, while te is quite important for large dams, i.e., Hw = 150 m. On the other hand, Fig. 5(b) shows the relative importance (and magnitude) of the hydrodynamic pressures with increasing agm compared to the hydrostatic pressure for

Fig. 5 – Impact of te and agm on the hydrodynamic pressure.

the standard dam (i.e., H1 = Hw = 100 m). As seen, the value of the hydrodynamic pressure reaches the hydrostatic one at the base under a quite large seismic coefficient value (i.e., agm = 1.2). However, as will be shown later, the probability of failure increases drastically when the dam is subjected to a seismic hazard.

5. Results

This section investigates the application of classical reliability analysis as well as three machine-learning-based classification techniques, explained in Section 3, to estimate the failure probability of gravity dam classes. In each classification technique, a set of Ntrn training data points in R^6 (for the hydrological and aging hazards, and later R^8 for the seismic hazard) and their corresponding labels are given as input. These training data points can be viewed as the rows of the matrix Xtrn ∈ R^(Ntrn×6), where each column represents one of the six random variables used in the analyses. The labels are stored in the column vector ytrn ∈ R^Ntrn, where each entry is either +1 (safe) or −1 (fail).

Three classification techniques are used to learn a classifier from the training data that allows us to predict the responses (safe/fail) for a new set of test data points. Thus, the learned classification rules can be used to estimate the failure probability. Similar to the training phase, the test data points are stored as the rows of the matrix Xtest ∈ R^(Ntest×6). The goal is to predict the response vector ŷtest ∈ R^Ntest using the three classification techniques. Finally, the failure probability can be estimated as the number of failed cases, i.e., the number of entries of ŷtest that are equal to −1, normalized by the total number of cases, Ntest. Given the estimate of the failure probability P̂f, the normalized estimation error, defined as |P̂f − Pf|/Pf, is used to measure the accuracy.

The overall training and estimation procedure is summarized in Algorithm 1. In order to make the implementation of the proposed method easier for interested readers, MATLAB's built-in functions are used in the given algorithm [46]. Here, Xtrn(:, i) denotes the i-th column of the matrix Xtrn. As mentioned in Section 3, the features or random variables (columns of Xtrn) should be standardized by computing the mean and standard deviation. The three classification methods can be implemented in MATLAB using FITCKNN, FITCSVM, and FITCNB. The learned classification rule is then used in PREDICT to estimate the responses for a set of standardized test data points.

Algorithm 1. Estimation of Pf via machine learning techniques

Input: Xtrn ∈ R^(Ntrn×6), ytrn ∈ R^Ntrn, Xtest ∈ R^(Ntest×6), classification-method
Output: estimated failure probability P̂f
Training classifier:
 1: for i = 1, ..., 6 do
 2:   mi = MEAN(Xtrn(:, i))
 3:   si = STD(Xtrn(:, i))
 4:   X̃trn(:, i) ← (Xtrn(:, i) − mi)/si
 5: end for
 6: if classification-method = KNN then model ← FITCKNN(X̃trn, ytrn)
 7: end if
 8: if classification-method = SVM then model ← FITCSVM(X̃trn, ytrn)
 9: end if
10: if classification-method = NBC then model ← FITCNB(X̃trn, ytrn)
11: end if
Estimation via trained classifier:
12: for i = 1, ..., 6 do
13:   X̃test(:, i) ← (Xtest(:, i) − mi)/si
14: end for
15: ŷtest ← PREDICT(model, X̃test)
16: P̂f = |{i : ŷtest(i) = −1}| / Ntest

For each hazard model, the failure probability is discussed at three levels, i.e., (1) the pilot dam, (2) the generalized model, and (3) the analytical solution for the generalized form. These three levels are fully explained for the hydrological hazard (Section 5.1). To avoid duplication (and due to the limited page count), only the generalized model is discussed for the two other hazard models (Sections 5.2 and 5.3). Developing the analytical solution is straightforward, as will be discussed in Section 5.1.3, and can easily be repeated for the aging and seismic hazard models.

5.1. Hydrological hazard

5.1.1. Pilot model
This first example deals with the hydrological hazard on the standard dam (B1/H1 = 0.7). Material and modeling parameters are selected based on Table 1. More specifically, they can be summarized as follows: B1 = 70 m, H1 = 100 m, Ld = 11.25 m, Lcr = U(0, 28) m, Hw = LN(85, 20) m, Hs = N(10, 6) m, ρc = 2400 kg/m³, ρs = 1850 kg/m³, crc = LN(0.6, 0.15) MPa, φrc = N(30, 7) deg., effD = U(0, 0.99), φs = 30 deg. Note that in most cases the values

of the normal and lognormal distributions are truncated to avoid unrealistic (and not necessarily negative) values. Classical structural reliability analysis is performed first based on the crude MCS and IS techniques, Fig. 6. As seen, both methods result in the same Pf (MCS-IS underestimates crude MCS by 4%). However, the number of simulations required for crude MCS is 100 times that of MCS-IS (1e6 vs. 1e4). The variation of the mean Pf is relatively high in crude MCS up to 1e5 simulations, showing that in no case should the number of simulations fall below this value. However, the MCS-IS results are stable even after 5000 simulations. Shown in Fig. 6 are also the confidence intervals based on Eq. (8). Although MCS-IS shows a narrower confidence interval at the same Nsim = 1e4 compared to crude MCS, for a complete Pf assessment (with 1e6 simulations for crude MCS and 1e4 for MCS-IS) the crude MCS has a narrower confidence interval.

Next, the performance of the three classification methods is studied for various numbers of training data points. The value of Ntrn varies from 1e3 to 1e4 and from 1e2 to 1e3 in crude MCS and MCS-IS, respectively. A set of 1e6 test data points is used to estimate the failure probability as described in Algorithm 1, which is compared with the true value of Pf (assumed to be Pf^MCS in Fig. 6(a)). Fig. 7 reports the mean and standard deviation of the normalized estimation error over 50 trials. In each trial, Ntrn data points are chosen randomly from the total of 1e6 data points for crude MCS and 1e4 data points for MCS-IS. The normalized estimation error for each trial is defined as |P̂f − Pf|/Pf, where P̂f and Pf are the estimated and true values of the failure probability, respectively. We compute the mean and standard deviation of the 50 normalized estimation errors.

It is observed that the KNN classification method has much higher accuracy on crude MCS compared to MCS-IS. This is mainly due to the fact that KNN depends heavily on the local structure of the data, which may not be preserved under importance sampling assumptions. Moreover, based on Fig. 7(a), the KNN method with K = 1 performs more accurately compared to K = 2, 3. To explore the possibility of improving the accuracy using weighted KNN (WKNN), results are reported in Fig. 7(b) and (f). It is concluded that the WKNN method has nearly identical accuracy to KNN for both crude MCS and MCS-IS. Therefore, it is reasonable to use KNN instead of WKNN due to its lower complexity.

The accuracy of the SVM classification technique using both linear and polynomial kernel functions is reported in Fig. 7(c) and (g). It is shown that the polynomial kernel function leads to more accurate estimates of Pf for both crude MCS and MCS-IS. This means that the polynomial kernel function is capable of dealing with the nonlinearity in the structure of the data in this example. Furthermore, the accuracy of SVM using MCS-IS is nearly identical to crude MCS with an order of magnitude fewer training data points Ntrn. This comes from the fact that importance sampling provides a better sampling of the global structure and, thus, facilitates finding the separating hyperplane.

Finally, the accuracy of NBC is investigated for both crude MCS and MCS-IS in Fig. 7(d) and (h). It is observed that the naive Bayes classifier using MCS-IS has higher accuracy than crude MCS. In fact, the estimation error is reduced by a factor of 2 using MCS-IS.

5.1.2. Generalized model
Results of the pilot model in the previous section are generalized for different dam classes; see Fig. 3. The different classes of dams are distinguished by the B1/H1 ratio. Other characteristics of the model are parametrically changed based on Table 1.

Fig. 8(a) illustrates the general class of gravity dams, where H1 varies from 60 to 160 m and B1 takes values from 40 to 90 m. This wide range of B1/H1 ratios is used to determine the failure probability of dams with a proportional water level (the mean of Hw changes proportionally to H1 while the standard deviation is kept constant). Also, in this figure, the red rectangle represents a narrow range of B1/H1 ratios in which H1 is 100 m, while Hw varies from 80 to 110 m. Finally, the green square in this figure is the location of the standard dam with B1/H1 = 0.7 (already studied as the pilot model). Fig. 8(b) shows the cases where increasing H1 and reducing B1 lead to an increase in the failure probability, while Fig. 8(c) shows that increasing the water level increases the failure probability for a constant B1/H1 ratio.

Based on the previous results in Section 5.1.1 for the pilot model, four sets of experiments are performed in this section: (1) SVM using crude MCS, (2) SVM using MCS-IS, (3) KNN using crude MCS, and (4) NBC using MCS-IS. In the experiments with crude MCS, a set of Ntrn = 1e4 data points is chosen randomly from the total of 1e6 data points for training the classifiers. Learning classifiers using MCS-IS is based on random sampling of Ntrn = 1e3 training data points from 1e4 data points.

In Fig. 9, the mean of the Pf estimation error over 50 trials is reported for two cases: (1) varying (H1, B1) with proportional Hw (corresponding to Fig. 8(b)), and (2) varying (Hw, B1/H1) (corresponding to Fig. 8(c)). According to Fig. 8, small values of the two parameters lead to very small failure probabilities. This means that it is more difficult to learn a reliable classification rule in this

Fig. 6 – Estimated failure probability and the confidence intervals for the pilot hydrological hazard model; the blue line shows the mean and the pink lines the confidence intervals.

Fig. 7 – Failure probability estimation error using various machine learning techniques for the pilot hydrological hazard
model.

Fig. 8 – Estimated failure probability and the confidence intervals for different dam classes under hydrological hazard risk; the blue line shows the mean and the pink lines the confidence intervals.

regime, due to the small number of training data points from the class that corresponds to failure. This observation can easily be verified in Fig. 9, since all classifiers have larger estimation errors for smaller values of (H1, B1) or (Hw, B1/H1). However, as these quantities get larger, the estimation error decreases and approaches zero, as one expects.

Based on Fig. 9(b) and (f), SVM using MCS-IS has the best performance among the four classification techniques. This observation confirms that importance sampling is a successful technique for preserving the global structure of the data using a small number of training data points Ntrn. Furthermore, it is seen that SVM and KNN using crude MCS have high accuracy in estimating the failure probability Pf.

5.1.3. Analytical model
So far, the reliability analysis of gravity dams has been performed with the MCS family, and the results were estimated with three machine learning techniques. In most cases, the failure probability was presented as a direct function of one or two parameters, while the other variables were selected randomly. This resulted in a group of increasing curves/surfaces similar to sigmoid-type growth curves. By definition, a sigmoid function is a bounded differentiable real function that is defined for all real input values and has a positive derivative at each point [47].

Thus, the question arises: is there an analytical model to fit the results of the reliability analysis? Hariri-Ardebili [2] already proposed a sigmoid-type curve for quantifying a dam's capacity curve, and it can be used for the reliability function too:

  S(x) = c1 + c2 (1 − e^(c3 x + c4)) / (1 + e^(c5 x + c6))    (35)

where ci (i = 1, 2, ..., 6) are the model constants obtained from nonlinear least-squares curve fitting.

Since Fig. 8(b) is the most complete set of analyses on different dam classes, it is selected as the case study to examine the applicability of Eq. (35) in providing a general analytical model for the failure probability. To prevent overfitting of the results, coefficients c1 and c2 are set to zero and one, respectively. Thus, the remaining model includes four coefficients to be fitted by nonlinear least-squares optimization techniques [46]. Shown in Fig. 10(a) are the fitted curves and the original data points. The quality of the fit is shown in Fig. 10(b), where all the residuals are limited to 1.5%. The coefficient of determination is more than 0.99 for all six curves.
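The four-coefficient fit described above can be reproduced with SciPy's nonlinear least squares. The data below are synthetic points generated from an assumed coefficient set (not the paper's fitted values); the sketch only shows the mechanics of fitting Eq. (35) with c1 = 0 and c2 = 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, c3, c4, c5, c6):
    """Eq. (35) with c1 = 0 and c2 = 1, as used for the failure-probability fit."""
    return (1.0 - np.exp(c3 * x + c4)) / (1.0 + np.exp(c5 * x + c6))

# Synthetic (H1, Pf) points standing in for one curve of Fig. 10(a); the
# generating coefficients are assumed for illustration only.
H1 = np.linspace(60.0, 160.0, 11)
Pf = sigmoid(H1, -0.02, 1.0, -0.04, 1.5)

popt, _ = curve_fit(sigmoid, H1, Pf, p0=[-0.01, 0.0, -0.01, 0.0], maxfev=20000)
max_resid = np.abs(Pf - sigmoid(H1, *popt)).max()
print("fitted c3..c6:", np.round(popt, 4))
print("max |residual|:", max_resid)
```

With noise-free data the optimizer recovers the generating curve essentially exactly; on real, scattered Pf estimates the residuals and the coefficient of determination play the role reported in Fig. 10(b).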

Fig. 9 – Failure probability estimation error for different dam classes subjected to hydrological hazard.
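Algorithm 1 translates almost line for line into Python: scikit-learn's KNeighborsClassifier, SVC, and GaussianNB play the roles of MATLAB's FITCKNN, FITCSVM, and FITCNB. The data below are synthetic (a linear pseudo-limit-state on six standardized features), so the printed numbers only illustrate the workflow, not the paper's results.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def make_data(n):
    """Synthetic stand-in for the six random variables: label -1 (fail)
    when a linear pseudo-limit-state is violated, else +1 (safe)."""
    X = rng.normal(size=(n, 6))
    z = X @ np.array([1.0, -0.5, 0.3, 0.2, -0.4, 0.6])
    return X, np.where(z < -1.0, -1, 1)

X_trn, y_trn = make_data(1_000)       # N_trn training points
X_test, y_test = make_data(50_000)    # N_test points for the Pf estimate

# Steps 1-5 and 12-14 of Algorithm 1: standardize with the training moments.
mu, sd = X_trn.mean(axis=0), X_trn.std(axis=0)
X_trn_s, X_test_s = (X_trn - mu) / sd, (X_test - mu) / sd

models = {
    "KNN": KNeighborsClassifier(n_neighbors=1),
    "SVM": SVC(kernel="poly", degree=2, coef0=1.0),
    "NBC": GaussianNB(),
}
Pf_ref = np.mean(y_test == -1)        # reference value from the true labels
Pf_hat = {}
for name, model in models.items():
    model.fit(X_trn_s, y_trn)                              # FITC* step
    Pf_hat[name] = np.mean(model.predict(X_test_s) == -1)  # steps 15-16
    print(f"{name}: Pf_hat = {Pf_hat[name]:.4f} (reference {Pf_ref:.4f})")
```

All three estimates should land close to the reference value here, mirroring the behavior reported for the generalized dam classes above.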

So far, each curve presents the dependency of Pf on H1. To establish a relationship between these coefficients and B1, the resulting coefficients (ci, i = 3, ..., 6) are plotted versus B1, Fig. 11. As is clear, there is a (semi-)linear relationship between ci and B1. To avoid further complexity in the model, these four relations are assumed to be linear functions of B1 and can be written in the form ci = (p1)i B1 + (p2)i. Again, the two parameters, slope and intercept, are found for the four coefficients ci. Consequently, the analytical model can be wrapped up in the following form:

  Pf(H1, B1) = [1 − e^((0.0001888 B1 − 0.02008) H1 + (0.008561 B1 + 1.067))] / [1 + e^((0.0001975 B1 − 0.04129) H1 + (0.07097 B1 + 1.503))]    (36)

The potential application of quadratic functions for the ci coefficients was also evaluated. Overall, the quality of the failure surface is improved by less than 5%. However, the competing linear and quadratic functions should be compared based on the Akaike information criterion (AIC) [48]. AIC not only rewards goodness-of-fit, but also includes a penalty that is an increasing function of the number of estimated parameters. We found that the AIC for the linear relationships is better than for the quadratic forms.

This model is plotted in Fig. 12 with a very fine grid (as opposed to the results obtained from the reliability analysis based on a very coarse grid). This model can obviously be used for all dam classes having similar loading and material properties. This paper does not intend to propagate this equation for the other random properties (e.g., water level, cohesion, friction angle) and natural hazard risks (e.g., aging and seismic). This is clearly the subject of another paper with more details and a larger number of base simulations.

5.2. Aging hazard

In this section, four sets of experiments are performed to investigate the estimation of the failure probability under the aging condition: (1) SVM using crude MCS, (2) SVM using MCS-IS, (3) KNN using crude MCS, and (4) NBC using MCS-IS. Similar to Section 5.1.2, a set of Ntrn = 1e4 data points is chosen randomly from the total of 1e6 data points for training the classifiers using crude MCS. Learning classifiers using MCS-IS is based on random sampling of Ntrn = 1e3 training data points from 1e4 data points.

In Fig. 13(a), the mean and standard deviation of the estimated Pf are reported and compared with the reference value of Pf^MCS using 1e6 simulations. It is observed that all four classification strategies lead to accurate estimates of Pf (i.e., the differences between the estimated and true values are negligible). However, at small scales, the SVM family underestimates the failure probability, while KNN and NBC overestimate Pf. To further investigate the performance of these methods, a plot of the estimation error in log scale is provided in Fig. 13(b). In general, the estimation error is reduced as time passes (and therefore the failure probability increases). Among the four methods, SVM using crude MCS has the best performance. Furthermore, SVM using MCS-IS shows the most stable behavior, as its estimation error has a smaller reduction rate. Finally, the Pf estimation error of NBC using MCS-IS is higher than that of SVM using crude MCS by one order of magnitude.

The reported Pf and estimation error in Fig. 13(a) and (b) were based on the instant probability of failure, PfI (see Section 2.2). All these results are based on the standard dam shape (B1/H1 = 0.7) subjected to the time-dependent loads/degradation models explained in Section 4.2. One major assumption in these analyses is that all the random variables are uncorrelated. Fig. 13(c) investigates the impact of the correlation between crc and φrc on the failure probability. These two random variables are used because they are the only variables in the system that are connected physically through the rock-concrete interface characteristics. Also shown in this figure is the comparison between the instant and cumulative failure probabilities (Eq. (5)). The correlation varies over 0.0:0.25:1.0 (zero means the two variables are independent and one means they are fully correlated). Three main conclusions are:

Fig. 10 – Fitting an analytical model to discrete data points from different dam classes under hydrological hazard.

Fig. 11 – Estimated coefficients and their relation with B1.

- In the case of PfI, there is an intersection point, corresponding to PfI ≈ 0.5, through which all five curves pass. Consequently, for lower failure probabilities, increasing the correlation coefficient decreases the failure probability, and vice versa.
- In the case of PfC, increasing the correlation coefficient decreases the failure probability at any time. For example, the ratio of the cumulative failure probabilities at t = 100 years between the fully correlated and uncorrelated cases is about 1/2.
- As expected from Eq. (5), the cumulative Pf reaches unity faster than the instant Pf, as it depends on all the previous instant partial failures. For example, the cumulative curves reach unity sometime between 110 and 120 years, whereas the instant curves reach a 50% probability of instant failure at t ≈ 130 years.

Fig. 12 – Analytical failure probability for different dam classes under hydrological hazard, Eq. (36).

5.3. Seismic hazard

Seismic hazard is modeled using the seismic coefficient method by adding the inertia force of the dam itself and the hydrodynamic pressure to the demands, S, in Eq. (1). Fig. 14(a) shows the failure probability as a function of the seismic coefficient, agm. Two broad models are considered: (1) uncorrelated random variables (zero correlation), and (2) fully correlated random variables (unit correlation) between the friction angle and cohesion. Within each model, the value of te (see Eq. (34)) varies over 0.5:0.25:1.5. As seen, there is no difference among the curves with varying te, meaning that this variable can be assumed constant for practical cases. The correlated and uncorrelated curves are close and intersect at Pf ≈ 0.6. Before the crossing point, the fully correlated model provides a higher Pf, and vice versa.

Fig. 14(b) shows the failure probability of the different dam classes. In all cases, the uncorrelated random variable assumption is used and te is one. As expected, decreasing the B1/H1 ratio increases the failure probability. For example, comparing Pf for B1/H1 = 0.5 and B1/H1 = 1.0 shows that the former is about five times the latter at agm = 0.2 and twice at agm = 0.5. Based on this figure, even under very small agm values there is a jump in the Pf value.

Next, the simultaneous impact of the seismic loading and reservoir water level on the failure probability is studied, Fig. 14(c). Again, the fully correlated and uncorrelated modes are contrasted. As in Fig. 14(a), the curves cross at Pf = 0.6. Increasing Hw increases Pf. For the half-full reservoir (i.e., Hw = 50 m), the probability of failure does not reach unity even for a large value of the seismic coefficient (i.e., agm = 0.8). For the full reservoir
archives of civil and mechanical engineering 18 (2018) 592–610 607

Fig. 13 – Estimated failure probability, the confidence intervals and estimation error under aging risk.

Fig. 14 – Estimated failure probability for different dam classes, correlation and water level subjected to seismic hazard.

(i.e., Hw = 100 m), the probability of failure under relatively small agm values (i.e., 0.1) is already 40%.

As in Section 5.1.2, four sets of experiments are performed: (1) SVM using crude MCS, (2) SVM using MCS-IS, (3) KNN using crude MCS, and (4) NBC using MCS-IS. A set of Ntrn = 1e4 data points is chosen randomly from the total of 1e6 data points for training the classifiers using crude MCS. Training the classifiers using MCS-IS is based on random sampling of Ntrn = 1e3 training data points from 1e4 data points.

In Fig. 15, the mean of the Pf estimation error over 50 trials is reported for different combinations of (Hw, agm). From Fig. 14, it is observed that small values of Hw and agm lead to very small failure probabilities. Therefore, as mentioned before, the training procedure becomes more difficult due to the small number of training data points. However, when the values of Hw and agm increase, more training data points are available from both classes (safe/fail) to learn a reliable and accurate classification rule. This phenomenon can be seen in Fig. 15, since all classifiers have a higher estimation error for small values of (Hw, agm). As these quantities get larger, the estimation error decreases and gets very close to zero.

The SVM classifier using MCS-IS has the best performance among the four classification techniques, based on Fig. 15(b). This observation is consistent with the results in Section 5.1.2, where SVM using MCS-IS has the highest accuracy as well. Therefore, importance sampling is a successful technique to preserve the global structure of the data using a small number of training data points for both the hydrological and seismic hazards. Furthermore, similar to Section 5.1.2, it is concluded that SVM and KNN using crude MCS have high accuracy in estimating the failure probability Pf. In addition, NBC using MCS-IS provides high-accuracy estimates when Pf is relatively large.

6. Concluding remarks and future work

This paper presented a simplified reliability analysis framework for concrete gravity dams subjected to multi (hydrological, aging, and seismic) hazard risk. A group of time-variant degradation models were proposed for different random variables. Limit state functions were presented in the explicit form. Both the classical reliability analysis techniques and the machine learning methods were applied.

For each of the three hazards, three types of models are considered: (1) pilot model, (2) generalized model, and (3) analytical model. The framework for each model is entirely explained for the hydrological hazard; however, only the pilot model is explored for the aging and seismic hazards. For less experienced readers, the following algorithm explains the step-by-step procedure to develop all three models:

1. Quantify the uncertainties in the system and determine the distributional models, Table 1.
   a. System uncertainties, e.g. dimensions, location of drainage, crack, resistance parameters, etc.
   b. Hydrological hazard uncertainty, i.e. pool water height.
   c. Aging hazard uncertainty, i.e. aging time.
   d. Seismic hazard uncertainty, i.e. seismic coefficient.
2. Select the pilot model. Many gravity dams have a B1/H1 ratio of about 0.7; thus, B1 = 70 m and H1 = 100 m can be a good starting point.
3. Determine the specific demand-affecting parameter in each hazard model (i.e. water level, age, seismic coefficient).
4. Perform a set of probabilistic simulations based on MCS and develop a sigmoid-type failure model, where the vertical axis presents Pf and the horizontal one the demand-affecting parameter. Such plots are shown in Figs. 13(c) and 14(b).
5. Determine the possible range of dam width and height and build a matrix of dam classes, Fig. 3.
6. Repeat steps 1 to 5 for any individual combination of B1 and H1. Use the machine learning techniques to reduce the computational effort.
7. Present the "generalized" model as Pf = f(B1, H1, Di), in which Di refers to Hw in the hydrological model, t in the aging model, and agm in the seismic model. Such plots are shown in Fig. 8(b) and (c).
8. Fit a sigmoid-type curve (or surface) to the data and develop the "analytical" model, Figs. 10(a) and 12. There are different models to be used for this purpose. This paper presented a new model based on Eq. (35); however, a comprehensive set of models with their pros and cons can be found in Hariri-Ardebili [49].

Fig. 15 – Failure probability estimation error for different water levels subjected to seismic hazard.

The following summarizes the main observations and results:

- In the conducted experiments, SVM has the highest accuracy among the three classification techniques for estimating the failure probability. This is mainly for two reasons: (1) SVM takes into account the global structure of the data by finding a separating hyperplane, and (2) SVM can deal with the nonlinear structure of the data using nonlinear kernels such as the polynomial kernel function.
- The performance of SVM can be improved using the importance sampling technique. This improvement is more significant for small failure probabilities, since importance sampling provides a much better training data set to distinguish failure from non-failure cases. It is observed that SVM using MCS-IS requires an order of magnitude less training data compared to crude MCS.
- KNN is an extremely simple algorithm which eliminates the need to solve any optimization problem. However, KNN is based on the local structure of the data, and for this reason it can only be used in conjunction with crude MCS. Therefore, KNN often requires more training data points, specifically for small failure probabilities, to achieve high accuracy.
- NBC is a simple probabilistic classification technique. However, the performance of this method for small failure probabilities is not as accurate as the other classification techniques studied in this work.
- Accounting for the cumulative failure probability increases the value of Pf considerably compared to the instant failure mode.
- Accounting for the correlation between the random variables increases the failure probability at smaller Pf, while it decreases the failure probability for near-collapse cases.
- Results of the reliability analysis are generalized for different classes of gravity dams. A sigmoid-type function is further proposed and applied to the hydrological hazard only. This model can be helpful for practitioners by providing a general solution for different dam types.

The following remarks can be considered in future research:

- Despite the superior performance of the SVM technique, it is still not a precise classification algorithm, especially for "rare events". One reason can be attributed to the fact that determining a clear separation using linear functions in a projected high-dimensional feature space is difficult to achieve. Applications of other machine learning techniques, such as artificial neural networks, can be investigated to support the findings in this paper or to improve the classification.
- Advanced analysis techniques such as finite element, finite difference, etc. can be combined with the framework proposed in this paper to improve the quality of the results and to classify based on any desired response quantity (e.g. internal stresses and pore water pressure).
- The interaction between the aging hazard and either the hydrological or the seismic one can be accounted for. This allows quantifying the reliability of existing old and deteriorated gravity dams. Such a framework is already proposed by Hariri-Ardebili [2] for the capacity function of dams and by Ghosh and Padgett [50] for the fragility function of bridges.

Ethical statement

Authors state that the research was conducted according to ethical standards.
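To make the workflow concrete (sample the random variables by MCS, evaluate an explicit limit state, and train a classifier on a small subset of the labeled data), the following minimal sketch contrasts a crude MCS estimate of Pf with a 1-nearest-neighbor surrogate, mimicking the "KNN using crude MCS" setup with Ntrn much smaller than N. The limit state, distributions, and all numerical values are illustrative placeholders, not the calibrated inputs of this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sliding limit state for a triangular gravity dam monolith
# (B1 = 70 m, H1 = 100 m); every coefficient below is a placeholder.
def limit_state(phi, c, hw, agm=0.1):
    W = 0.5 * 70.0 * 100.0 * 24.0        # self-weight (kN/m)
    U = 0.5 * 9.81 * hw * 70.0           # simplified uplift (kN/m)
    H = 0.5 * 9.81 * hw**2 + agm * W     # hydrostatic + inertia demand (kN/m)
    R = (W - U) * np.tan(np.radians(phi)) + c * 70.0
    return R - H                          # failure when g < 0

N = 10_000
phi = rng.normal(35.0, 3.0, N)            # friction angle (deg)
c = rng.lognormal(np.log(50.0), 0.3, N)   # cohesion (kPa)
hw = rng.uniform(60.0, 100.0, N)          # pool level (m)

y = (limit_state(phi, c, hw) < 0).astype(int)   # 1 = fail, 0 = safe
pf_mcs = y.mean()                               # crude MCS estimate of Pf

# 1-NN surrogate trained on a small random subset (standardized features).
X = np.column_stack([phi, c, hw])
X = (X - X.mean(axis=0)) / X.std(axis=0)
idx = rng.choice(N, 500, replace=False)         # Ntrn = 500 << N
Xtr, ytr = X[idx], y[idx]

# squared distances via the expansion |x - t|^2 = |x|^2 - 2 x.t + |t|^2
d2 = (X**2).sum(1)[:, None] - 2.0 * X @ Xtr.T + (Xtr**2).sum(1)[None, :]
pf_knn = ytr[d2.argmin(axis=1)].mean()          # surrogate estimate of Pf

print(f"Pf (crude MCS, N = 1e4): {pf_mcs:.3f}")
print(f"Pf (1-NN surrogate):     {pf_knn:.3f}")
```

In the pipeline studied in this paper the classifier would be SVM, KNN, or NBC (e.g., via a library such as scikit-learn), and the MCS-IS variants would draw the training subset from an importance-sampling density concentrated near the limit state rather than uniformly.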

Acknowledgements

The first author would like to express his sincere appreciation to his prior advisor (and the current mentor), Professor Victor E. Saouma at the University of Colorado Boulder for his enthusiastic guidance and advice throughout this research.

references

[1] FERC-PFMA, FERC guidance document: potential failure mode analysis, Tech. Rep., Federal Emergency Regulatory Committee, 2005.
[2] M.A. Hariri-Ardebili, Performance based earthquake engineering for concrete dams (Ph.D. thesis), University of Colorado, Boulder, CO, 2015.
[3] R. Charlwood, Predicting the long term behaviour and service life of concrete dams, in: Proceedings of the 2nd International Conference in Long Term Behavior of Dams, Graz, Austria, 2009.
[4] ASDSO, State and federal oversight of dam safety must be improved, Magazine of Association of State Dam Safety Officials (ASDSO).
[5] M.A. Hariri-Ardebili, M.R. Kianoush, Integrative seismic safety evaluation of a high concrete arch dam, Soil Dyn. Earthq. Eng. 67 (2014) 85–101.
[6] K. Bury, H. Kreuzer, Assessing the failure probability of gravity dams, Int. Water Power Dam Construct. 37 (11) (1985) 46–50.
[7] V. Saouma, Reliability based nonlinear fracture mechanics analysis of a concrete dam; a simplified approach, Dam Eng. 16 (3) (2006) 219–241.
[8] C. Carvajal, L. Peyras, C. Bacconnet, J. Bécue, Probability modelling of shear strength parameters of RCC gravity dams for reliability analysis of structural safety, Eur. J. Environ. Civil Eng. 13 (2009) 91–119.
[9] L. Peyras, C. Carvajal, H. Felix, C. Bacconnet, P. Royet, J. Becue, D. Boissier, Probability-based assessment of dam safety using combined risk analysis and reliability methods – application to hazards studies, Eur. J. Environ. Civil Eng. 16 (2012) 795–817.
[10] L. Altarejos-Garcia, I. Escuder-Bueno, A. Serrano-Lombillo, M. de Membrillera-Ortuno, Methodology for estimating the probability of failure by sliding in concrete gravity dams in the context of risk analysis, Struct. Saf. 36–37 (2012) 1–13.
[11] C. Yu, Time-variant finite element reliability for performance degradation assessment of concrete gravity dam, in: 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), IEEE, 2015, 73–76.
[12] A. Krounis, Sliding stability re-assessment of concrete dams with bonded concrete-rock interfaces (Ph.D. thesis), KTH Royal Institute of Technology, 2016.
[13] F. Salazar, M. Toledo, E. Oñate, R. Morán, An empirical comparison of machine learning techniques for dam behaviour modelling, Struct. Saf. 56 (2015) 9–17.
[14] F. Salazar, R. Morán, M.Á. Toledo, E. Oñate, Data-based models for the prediction of dam behaviour: a review and some methodological considerations, Arch. Comput. Methods Eng. (2015) 1–21.
[15] V. Saouma, E. Hansen, B. Rajagopalan, Statistical and 3d nonlinear finite element analysis of Schlegeis dam, in: Proceedings of the Sixth ICOLD Benchmark Workshop on Numerical Analysis of Dams, 2001, 17–19.
[16] A. Gaspar, F. Lopez-Caballero, A. Modaressi-Farahmand-Razavi, A. Gomes-Correia, Methodology for a probabilistic analysis of an RCC gravity dam construction. Modelling of temperature, hydration degree and ageing degree fields, Eng. Struct. 65 (2014) 99–110.
[17] J. Mata, N.S. Leitão, A.T. de Castro, J.S. da Costa, Construction of decision rules for early detection of a developing concrete arch dam failure scenario. A discriminant approach, Comput. Struct. 142 (2014) 45–53.
[18] R. Melchers, Simulation in time-invariant and time-variant reliability problems, in: Reliability and Optimization of Structural Systems '91, Springer, 1992, 39–82.
[19] S. Marelli, R. Schöbi, B. Sudret, UQLab User Manual – Structural Reliability, Tech. Rep. UQLab-V0.92-107, Chair of Risk, Safety and Uncertainty Quantification, ETH Zurich, 2016.
[20] R.E. Melchers, Structural Reliability Analysis and Prediction, John Wiley & Sons Ltd, 1999.
[21] J. Jeppsson, Reliability-based assessment procedures for existing concrete structures (Ph.D. thesis), Lund University, 2003.
[22] Q. Li, C. Wang, B.R. Ellingwood, Time-dependent reliability of aging structures in the presence of non-stationary loads and degradation, Struct. Saf. 52 (2015) 132–141.
[23] A. Harbitz, An efficient sampling method for probability of failure calculation, Struct. Saf. 3 (2) (1986) 109–115.
[24] M. McKay, R. Beckman, W. Conover, A comparison of three methods for selecting values of input variables in the analysis of output from a computer code, Technometrics 21 (2) (1979) 239–245.
[25] S.-K. Au, J.L. Beck, Estimation of small failure probabilities in high dimensions by subset simulation, Probab. Eng. Mech. 16 (4) (2001) 263–277.
[26] Y. Lee, D. Hwang, A study on the techniques of estimating the probability of failure, J. Chungcheong Math. Soc. 21 (4) (2008) 573–583.
[27] G.I. Schuëller, R. Stix, A critical appraisal of methods to determine failure probabilities, Struct. Saf. 4 (4) (1987) 293–309.
[28] R. Melchers, Search-based importance sampling, Struct. Saf. 9 (2) (1990) 117–128.
[29] S. Au, J.L. Beck, A new adaptive importance sampling scheme for reliability calculations, Struct. Saf. 21 (2) (1999) 135–158.
[30] T. Hastie, R.J. Tibshirani, J.H. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer, 2011.
[31] K. Tanioka, H. Yadohisa, Effect of data standardization on the result of k-means clustering, in: Challenges at the Interface of Data Analysis, Springer, 2012, 59–67.
[32] X. Wu, et al., Top 10 algorithms in data mining, Knowl. Inf. Syst. 14 (1) (2008) 1–37.
[33] B. Schölkopf, A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond, MIT Press, 2002.
[34] C.M. Bishop, Pattern Recognition and Machine Learning, Information Science and Statistics, Springer-Verlag New York, Inc., 2006, ISBN: 0387310738.
[35] F. Pourkamali-Anaraki, S. Hughes, Kernel compressive sensing, in: IEEE International Conference on Image Processing, 2013, 494–498.
[36] F. Pourkamali-Anaraki, S. Becker, A randomized approach to efficient kernel clustering, in: IEEE Global Conference on Signal and Information Processing, 2016, 207–211.
[37] J. Spross, A Critical Review of the Observational Method (Licentiate dissertation), Stockholm, 2014. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-144207.
[38] M. Westberg, Reliability-based assessment of concrete dam stability (Licentiate thesis), Report TVBK-1033, Division of Structural Engineering, Lund Institute of Technology, Lund University, 2010.
[39] M. Westberg Wilde, F. Johansson, System reliability of concrete dams with respect to foundation stability: application to a spillway, J. Geotech. Geoenviron. Eng. 139 (2) (2013) 308–319.
[40] S. Huaizhi, H. Jiang, Z. Wen, Service life predicting of dam systems with correlated failure modes, ASCE J. Perform. Construct. Facil. 27 (2013) 252–269.
[41] A. Morales-Torres, I. Escuder-Bueno, L. Altarejos-Garcia, A. Serrano-Lombillo, Building fragility curves of sliding failure of concrete gravity dams integrating natural and epistemic uncertainties, Eng. Struct. 125 (2016) 227–235.
[42] S.-I. Yang, D.M. Frangopol, L.C. Neves, Service life prediction of structural systems using lifetime functions with emphasis on bridges, Reliab. Eng. Syst. Saf. 86 (1) (2004) 39–51.
[43] E. Bretas, A. Batista, J. Lemos, P. Léger, Seismic analysis of gravity dams: a comparative study using a progressive methodology, in: Proc. of the EURODYN 2014 – 9th International Conference on Structural Dynamics, Oporto, 2014.
[44] USACE, Gravity Dam Design, Tech. Rep. EM 1110-2-2200, Department of the Army, U.S. Army Corps of Engineers, Washington, D.C., USA, 1995.
[45] H. Westergaard, Water pressures on dams during earthquakes, Trans. Am. Soc. Civil Eng. 98 (1933) 418–433.
[46] MATLAB, version 9.1 (R2016b), The MathWorks Inc., Natick, MA, 2016.
[47] J. Han, C. Moraga, The influence of the sigmoid function parameters on the speed of backpropagation learning, in: International Workshop on Artificial Neural Networks, Springer, 1995, 195–201.
[48] H. Akaike, A new look at the statistical model identification, IEEE Trans. Autom. Control 19 (6) (1974) 716–723.
[49] M.A. Hariri-Ardebili, Analytical failure probability model for generic gravity dam classes, Proc. Inst. Mech. Eng. Part O: J. Risk Reliab. 231 (5) (2017) 546–557.
[50] J. Ghosh, J. Padgett, Aging considerations in the development of time-dependent seismic fragility curves, J. Struct. Eng. 136 (2010) 1497–1511.
