Lecture Notes on SIAM
BARC Advanced Elective Course, May-July, 2004

Chapter 20: Structural Reliability

Author: Rohit Rastogi [1], RSD, BARC.

20.1 Introduction

For many years it has been assumed in design of structural systems that all loads and
strengths are deterministic. The strength of an element was determined in such a way
that it exceeded the load with a certain margin. The ratio between the strength and the
load was denoted the safety factor. This number was considered as a measure of the
reliability of the structure. In codes of practice for structural systems values for loads,
strengths and safety factors are prescribed. These values are traditionally determined
on the basis of experience and engineering judgment. However, in new codes partial
safety factors are used. Characteristic values of the uncertain loads and resistances are
specified and partial safety factors are applied to the loads and strengths in order to
ensure that the structure is safe enough. The partial safety factors are usually based on
experience or calibrated to existing codes or to measures of the reliability obtained by
probabilistic techniques.

Table 20.1: Some Risks in Society [20.1]

Activity              Approx. death rate        Typical exposure   Typical risk of death
                      (x 10^-9 deaths/h         (h/year)           (x 10^-6 per year)
                      exposure)
Alpine climbing       30000-40000               50                 1500-2000
Boating               1500                      80                 120
Swimming              3500                      50                 170
Cigarette smoking     2500                      400                1000
Air travel            1200                      20                 24
Car travel            700                       300                200
Train travel          80                        200                15
Coal mining (UK)      210                       1500               300
Construction work     70-200                    2200               150-440
Manufacturing         20                        2000               40
Building fires        1-3                       8000               8-24
Structural failures   0.02                      6000               0.1

As described above, structural analysis and design have traditionally been based on
deterministic methods. However, uncertainties in the loads, the strengths and the
modeling of the systems require that methods based on probabilistic techniques be
used in a number of situations. A structure is usually required to have a
satisfactory performance in its expected lifetime, i.e. it is required that it does not
collapse or become unsafe and that it fulfills certain functional requirements.
Generally structural systems have a rather small probability that they do not function
as intended; see table 20.1.


[1] rrastogi@apsara.barc.ernet.in
Reliability of structural systems can be defined as the probability that the structure
under consideration performs properly throughout its lifetime. Reliability
methods are used to estimate the probability of failure. The information in the models
on which the reliability analyses are based is generally not complete. Therefore the
estimated reliability should be considered a nominal measure of the reliability and
not an absolute number. However, if the reliability is estimated for a number of
structures using the same level of information and the same mathematical models,
then useful comparisons can be made of the reliability level of these structures.
Further, new structures can be designed by probabilistic methods if models and
information similar to those for existing structures, which are known to perform
satisfactorily, are used. If probabilistic methods are used to design structures for
which no similar existing structures are known, then the designer has to be very
careful and verify the models used as much as possible.

The reliability estimated as a measure of the safety of a structure can be used in a
decision (e.g. design) process. A lower bound on the reliability can be used as a
constraint in an optimal design problem. This lower bound can be obtained by
analyzing similar structures designed according to current design practice, or it
can be determined as the reliability level giving the largest utility (benefits - costs)
when solving a decision problem in which all possible costs and benefits in the
expected lifetime of the structure are taken into account.

In order to be able to estimate the reliability using probabilistic concepts it is
necessary to introduce stochastic variables and/or stochastic processes/fields and to
introduce failure and non-failure behavior of the structure under consideration.

Generally the main steps in a reliability analysis are:

1) Select a target reliability level.
2) Identify the significant failure modes of the structure.
3) Decompose the failure modes into series systems of parallel systems of
single components (only needed if the failure modes consist of more than
one component).
4) Formulate failure functions (limit state functions) corresponding to each
component in the failure modes.
5) Identify the stochastic variables and the deterministic parameters in the
failure functions. Further specify the distribution types and statistical
parameters for the stochastic variables and the dependencies between
them.
6) Estimate the reliability of each failure mode.
7) In a design process change the design if the reliabilities do not meet the
target reliabilities. In reliability analysis the reliability is compared with
the target reliability.
8) Evaluate the reliability result by performing sensitivity analyses.

The single steps are discussed below.

Typical failure modes to be considered in a reliability analysis of a structural system
are yielding, buckling (local and global), fatigue, fracture and excessive deformations.

The failure modes (limit states) are generally divided into:

Ultimate limit states
Ultimate limit states correspond to the maximum load carrying capacity, which can be
related to e.g. formation of a mechanism in the structure, excessive plasticity, rupture
due to fatigue and instability (buckling).

Conditional limit states
Conditional limit states correspond to the load-carrying capacity if a local part of the
structure has failed. A local failure can be caused by an accidental action or by fire.
The conditional limit states can be related to e.g. formation of a mechanism in the
structure, exceeding of the material strength or instability (buckling).

Serviceability limit states
Serviceability limit states are related to normal use of the structure, e.g. excessive
deflections, local damage and excessive vibrations.

The fundamental quantities that characterize the behavior of a structure are called the
basic variables and are denoted X = (X_1, X_2, X_3, ..., X_n), where n is the number of
basic stochastic variables. Typical examples of basic variables are loads, strengths,
dimensions and materials. The basic variables can be dependent or independent; see
below, where different types of uncertainty are discussed. In probabilistic analysis
these stochastic variables are represented using probability distribution functions.
These need to be derived for each variable using field data.

The uncertainty modeled by stochastic variables can be divided into the following
groups:

Physical uncertainty: or inherent uncertainty is related to the natural randomness of
a quantity, for example the uncertainty in the yield stress due to production
variability.

Measurement uncertainty: is the uncertainty caused by imperfect measurements of
for example a geometrical quantity.

Statistical uncertainty: is due to limited sample sizes of observed quantities.

Model uncertainty: is the uncertainty related to imperfect knowledge or
idealizations of the mathematical models used, or uncertainty related to the choice of
probability distribution types for the stochastic variables.

The above types of uncertainty are usually treated by the reliability methods
described in the following sections. Another type of uncertainty, which is not
covered by these methods, is gross errors or human errors. These types of errors can
be defined as deviations of an event or process from acceptable engineering practice.

Generally, methods to measure the reliability of a structure can be divided into four
groups:

Level I methods: The uncertain parameters are modeled by one characteristic value, as
for example in codes based on the partial safety factor concept.

Level II methods: The uncertain parameters are modeled by the mean values and the
standard deviations, and by the correlation coefficients between the stochastic
variables. The stochastic variables are implicitly assumed to be normally distributed.
The reliability index method is an example of a level II method.

Level III methods: The uncertain quantities are modeled by their joint distribution
functions. The probability of failure is estimated as a measure of the reliability.

Level IV methods: In these methods the consequences (cost) of failure are also taken
into account and the risk (consequence multiplied by the probability of failure) is used
as a measure of the reliability. In this way different designs can be compared on an
economic basis taking into account uncertainty, costs and benefits.

Level I methods can e.g. be calibrated using level II methods, level II methods can be
calibrated using level III methods, etc.

Level II and III reliability methods are considered in these lectures. Several
techniques can be used to estimate the reliability for level II and III methods, e.g.

Simulation techniques: Samples of the stochastic variables are generated and the
relative number of samples corresponding to failure is used to estimate the probability
of failure. The simulation techniques are different in the way the samples are
generated.

FORM techniques: In First Order Reliability Methods the limit state function (failure
function) is linearized and the reliability is estimated using level II or III methods.

SORM techniques: In Second Order Reliability Methods a quadratic approximation
to the failure function is determined and the probability of failure for the quadratic
failure surface is estimated.

20.2 Stochastic Variables
Consider a continuous stochastic variable X. The distribution function of X is
denoted F_X(x) and gives the probability F_X(x) = P(X \le x). A distribution function is
illustrated in figure 20.1. The density function f_X(x) is illustrated in figure 20.2 and is
defined by eq.(20.1).

f_X(x) = \frac{dF_X(x)}{dx}    (20.1)


Figure 20.1: Distribution Function F_X(x)

Figure 20.2: Density Function f_X(x)

The expected value \mu_X is defined by eq.(20.2).

\mu_X = \int_{-\infty}^{+\infty} x f_X(x)\, dx    (20.2)

The variance \sigma_X^2 is defined by eq.(20.3).

\sigma_X^2 = \int_{-\infty}^{+\infty} (x - \mu_X)^2 f_X(x)\, dx    (20.3)

\sigma_X is the standard deviation.
The coefficient of variation V_X is defined by eq.(20.4).

V_X = \frac{\sigma_X}{\mu_X}    (20.4)
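The moment integrals in eqs.(20.2)-(20.4) can be checked numerically. The sketch below applies the midpoint rule to a normal density; the distribution parameters and the integration grid are illustrative assumptions:

```python
from statistics import NormalDist

# Illustrative variable: X ~ Normal(mu=10, sigma=2), so eq.(20.2) should
# give 10, eq.(20.3) should give 4 and eq.(20.4) should give 0.2.
dist = NormalDist(mu=10.0, sigma=2.0)

# Midpoint rule on a grid wide enough (+/- 8 sigma) that the truncation
# error is negligible.
n, lo, hi = 20000, 10.0 - 16.0, 10.0 + 16.0
dx = (hi - lo) / n
xs = [lo + (i + 0.5) * dx for i in range(n)]

u_x = sum(x * dist.pdf(x) * dx for x in xs)                  # eq.(20.2)
var_x = sum((x - u_x) ** 2 * dist.pdf(x) * dx for x in xs)   # eq.(20.3)
v_x = var_x ** 0.5 / u_x                                     # eq.(20.4)
print(round(u_x, 4), round(var_x, 4), round(v_x, 4))
```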
20.2.1 Example: Probability of Failure, Fundamental Case
Consider a structural element with load-bearing capacity R which is loaded by the
load S. R and S are modeled by independent stochastic variables with density
functions f_R and f_S and distribution functions F_R and F_S; see figure 20.3. The
probability of failure becomes

P_f = P(\text{failure}) = P(R \le S) = \int_{-\infty}^{\infty} P(R \le x)\, P(x \le S \le x + dx) = \int_{-\infty}^{\infty} F_R(x) f_S(x)\, dx    (20.5)



Figure 20.3: Density Functions for Fundamental Case

Alternatively, the probability of failure can be evaluated as

P_f = P(\text{failure}) = P(R \le S) = \int_{-\infty}^{\infty} P(x \le R \le x + dx)\, P(S \ge x) = \int_{-\infty}^{\infty} f_R(x) \left[1 - F_S(x)\right] dx    (20.6)

It is noted that it is important that the lower part of the distribution for the strength
and the upper part of the distribution for the load are modeled as accurately as possible.
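For normally distributed R and S the integral in eq.(20.5) has the closed form \Phi(-\beta) with \beta = (\mu_R - \mu_S)/\sqrt{\sigma_R^2 + \sigma_S^2}, which gives a convenient check of a direct numerical evaluation. A minimal sketch (the distribution parameters are illustrative):

```python
from statistics import NormalDist

# Illustrative data: R ~ N(3.5, 0.25), S ~ N(2.0, 0.3)
R, S = NormalDist(3.5, 0.25), NormalDist(2.0, 0.3)

# eq.(20.5): Pf = integral of F_R(x) * f_S(x) dx, by the midpoint rule
n, lo, hi = 20000, -2.0, 8.0
dx = (hi - lo) / n
pf = sum(R.cdf(lo + (i + 0.5) * dx) * S.pdf(lo + (i + 0.5) * dx) * dx
         for i in range(n))

# Closed form for independent normal R and S
beta = (3.5 - 2.0) / (0.25 ** 2 + 0.3 ** 2) ** 0.5
pf_exact = NormalDist().cdf(-beta)
print(pf, pf_exact)
```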


20.2.2 Some Important Probability Distributions
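The tables of distributions that follow in the original notes are reproduced there as images and are not recoverable here. As a small placeholder sketch, the densities of three distributions used later in this chapter can be written as below; the parameterizations are standard conventions and the example evaluation is illustrative:

```python
import math

def normal_pdf(x, mu, sigma):
    # f(x) = 1/(sigma*sqrt(2*pi)) * exp(-0.5*((x - mu)/sigma)**2)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lognormal_pdf(x, lam, zeta):
    # lam and zeta are the mean and standard deviation of ln(X)
    return normal_pdf(math.log(x), lam, zeta) / x

def gumbel_pdf(x, alpha, u):
    # Extreme value type I (used for R in section 20.3.4):
    # F(x) = exp(-exp(-alpha*(x - u)))
    # f(x) = alpha*exp(-alpha*(x - u) - exp(-alpha*(x - u)))
    z = alpha * (x - u)
    return alpha * math.exp(-z - math.exp(-z))

print(normal_pdf(0.0, 0.0, 1.0))  # standard normal density at 0, ~0.3989
```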


20.2.3 Covariance and Correlation
The covariance between X_1 and X_2 is defined by

Cov[X_1, X_2] = E\left[(X_1 - \mu_1)(X_2 - \mu_2)\right]    (20.7)

It is seen that

Cov[X_1, X_1] = Var[X_1] = \sigma_1^2    (20.8)

The correlation coefficient between X_1 and X_2 is defined by

\rho_{X_1,X_2} = \frac{Cov[X_1, X_2]}{\sigma_1 \sigma_2}, \qquad -1 \le \rho_{X_1,X_2} \le 1    (20.9)

It is a measure of the linear dependence between X_1 and X_2.
If \rho_{X_1,X_2} = 0 then X_1 and X_2 are uncorrelated, but not necessarily statistically
independent. For a stochastic vector X = (X_1, X_2, ..., X_n) the covariance matrix is
defined by

C_X = \begin{bmatrix}
Var[X_1] & Cov[X_1, X_2] & \cdots & Cov[X_1, X_n] \\
Cov[X_2, X_1] & Var[X_2] & \cdots & Cov[X_2, X_n] \\
\vdots & \vdots & \ddots & \vdots \\
Cov[X_n, X_1] & Cov[X_n, X_2] & \cdots & Var[X_n]
\end{bmatrix}    (20.10)

Correspondingly the correlation coefficient matrix is defined by

\rho_X = \begin{bmatrix}
1 & \rho_{X_1,X_2} & \cdots & \rho_{X_1,X_n} \\
\rho_{X_2,X_1} & 1 & \cdots & \rho_{X_2,X_n} \\
\vdots & \vdots & \ddots & \vdots \\
\rho_{X_n,X_1} & \rho_{X_n,X_2} & \cdots & 1
\end{bmatrix}    (20.11)
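Eqs.(20.7)-(20.11) translate directly into sample estimates when data are available. A minimal sketch with two illustrative data series:

```python
# Sample covariance and correlation for two illustrative data series x1, x2,
# mirroring eqs.(20.7)-(20.11) with the expectation replaced by an average.
x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.1, 3.9, 6.2, 8.0, 9.8]

n = len(x1)
m1, m2 = sum(x1) / n, sum(x2) / n

def cov(a, b, ma, mb):
    # eq.(20.7): Cov[A, B] = E[(A - mu_A)(B - mu_B)]
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n

c11, c22 = cov(x1, x1, m1, m1), cov(x2, x2, m2, m2)
c12 = cov(x1, x2, m1, m2)

C = [[c11, c12], [c12, c22]]              # eq.(20.10), covariance matrix
rho = c12 / (c11 ** 0.5 * c22 ** 0.5)     # eq.(20.9), correlation coefficient
assert -1.0 <= rho <= 1.0                 # the bound in eq.(20.9)
print(round(rho, 4))
```

The data here are nearly linear, so rho comes out close to +1, illustrating that rho measures linear dependence.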


A structural component fails when the applied load S exceeds its resistance R, i.e.
R - S < 0. Generally, this condition is expressed in terms of a failure equation or a
limit state function g. Let g(X) be a function of the variables X_1, X_2, X_3, ..., X_n.
Failure is defined by the condition g(X) < 0. The variables X are modeled as stochastic
variables in the probabilistic analysis. The failure probability P_f is defined by

P_f = P(g(X) < 0)    (20.12)

It is the probability that x lies in the failure domain F,

F = \{x \mid g(x) < 0\}    (20.13)

The probability of failure can be written in terms of the joint density function of the
stochastic variables X (for independent variables, the product of the marginal density
functions f_{X_i}):

P_f = \int_F f_X(x)\, dx = \int_F f_{X_1}(x_1) \cdots f_{X_n}(x_n)\, dx_1 \cdots dx_n    (20.14)


Thus the calculation of the probability of failure essentially reduces to the calculation
of the integral given by eq.(20.14). The techniques for this broadly fall into two
categories. The first category includes approximation methods like FORM and SORM,
also known as fast probability integration (FPI) methods. The second category
comprises simulation-based methods, which are basically Monte Carlo methods.

20.3 Fast Probability Integration Methods (FPI)

The first developments of FPI methods took place almost 30 years ago. Since then the
methods have been refined and extended significantly, and by now they form one of
the most important tools for reliability evaluations in structural reliability theory.
Several commercial computer codes have been developed for FPI [20.2-20.2]
methods, and the methods are widely used in practical engineering problems and for
code calibration purposes.

20.3.1 Linear Limit State Functions and Normally Distributed
Correlated Variables (Level II Method, Cornell Index)

Consider the case where the limit state function g(X) is a linear function of the basic
random variables X. Then we may write the limit state function as

g(X) = a_0 + \sum_{i=1}^{n} a_i X_i    (20.15)

The limit state function g is linear and all X_i are normally distributed. The mean and
the standard deviation of g can be calculated as

\mu_g = a_0 + \sum_{i=1}^{n} a_i \mu_{X_i}

\sigma_g^2 = \sum_{i=1}^{n} a_i^2 \sigma_{X_i}^2 + \sum_{i=1}^{n} \sum_{j=1,\, j \ne i}^{n} \rho_{X_i X_j} a_i a_j \sigma_{X_i} \sigma_{X_j}    (20.16)

The distribution of g will also be normal. The probability of failure, P(g < 0), is
given by

P_f = P(g < 0) = \Phi\!\left(\frac{0 - \mu_g}{\sigma_g}\right) = \Phi\!\left(-\frac{\mu_g}{\sigma_g}\right) = \Phi(-\beta)

\beta = \frac{\mu_g}{\sigma_g} = \text{reliability index}    (20.17)


Figure 20.4: Illustration of \beta and P_f

This formulation of the reliability index was given by Cornell and is known as the
Cornell reliability index. It is valid when all the variables are normally distributed and
the limit state is linear. In reliability analysis it is generally sufficient to mention the
value of the reliability index.

Example:

Consider a failure function of the form g = R - S, with R ~ Normal(3.5, 0.25) and
S ~ Normal(2, 0.3). It is assumed that they are not correlated, i.e. \rho_{RS} = 0.

The Cornell reliability index is given by \beta = \mu_g / \sigma_g.

\mu_g = 3.5 - 2 = 1.5
\sigma_g = \sqrt{0.25^2 + 0.3^2} = 0.39
\beta = 1.5 / 0.39 = 3.84

P_f = \Phi(-3.84) = 6.15 \times 10^{-5}

20.3.2 Non-Linear Limit State Functions and Normally Distributed
Un-correlated Variables (Level II Method, MVFOSM)

Consider the case where the limit state function g(X) is a non-linear function of the
basic random variables X. The limit state function can be linearized using the first-order
terms of the Taylor series about the mean value.

The failure function after linearization can be written as

g(X) \approx g(\mu_X) + \sum_{i=1}^{n} \left.\frac{\partial g}{\partial X_i}\right|_{\mu_X} (X_i - \mu_{X_i})    (20.18)

The non-linear g now has a formulation similar to eq.(20.15), and thus the Cornell
reliability index can be obtained.

This method is known as the Mean-Value First-Order Second-Moment (MVFOSM)
method, as it uses the first-order terms of the Taylor series expansion at the mean
values, and the calculation of the reliability index uses only the first moment (mean)
and the second moment (standard deviation).

The Cornell reliability index, however, suffers from the problem of invariance: the
results for g = R - S, g = R^2 - S^2 and g = R/S - 1 will be different.
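The invariance problem can be seen numerically: linearizing two algebraically equivalent limit states at the mean values gives different reliability indices. The sketch below reuses the numbers from the Cornell example above (an illustrative choice):

```python
# MVFOSM beta for two equivalent limit states, showing the invariance
# problem (illustrative data: R ~ N(3.5, 0.25), S ~ N(2.0, 0.3)).
mu_R, sig_R = 3.5, 0.25
mu_S, sig_S = 2.0, 0.3

# g1 = R - S (already linear)
beta1 = (mu_R - mu_S) / (sig_R ** 2 + sig_S ** 2) ** 0.5

# g2 = R/S - 1: linearize at the mean values, eq.(20.18)
mu_g2 = mu_R / mu_S - 1.0
dR = 1.0 / mu_S               # dg2/dR at the mean
dS = -mu_R / mu_S ** 2        # dg2/dS at the mean
sig_g2 = ((dR * sig_R) ** 2 + (dS * sig_S) ** 2) ** 0.5
beta2 = mu_g2 / sig_g2

# Different beta, although g1 = 0 and g2 = 0 describe the same failure event.
print(round(beta1, 2), round(beta2, 2))
```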


20.3.3 Non-Linear Limit State Functions and Normally Distributed
Un-correlated Variables (Level II Method, Hasofer-Lind Reliability
Index)

Hasofer and Lind removed the problem of invariance in the Cornell index. The limit
state function g(X) is transformed into a limit state function g(u) by normalization of
the random variables into standardized normally distributed random variables:

u_i = \frac{X_i - \mu_{X_i}}{\sigma_{X_i}}    (20.19)

such that the random variables u_i have zero means and unit standard deviations.

Then the reliability index \beta has the simple geometrical interpretation as the smallest
distance from the origin to the line (or generally the hyper-plane) forming the boundary
between the safe domain and the failure domain, i.e. the domain defined by the failure
event. It should be noted that this definition of the reliability index does not depend on
the form of the limit state function but only on the boundary between the safe domain
and the failure domain. The point on the failure surface with the smallest distance to
the origin is commonly denoted the design point.

The geometrical interpretation is illustrated in Figure 20.5, where a two-dimensional
case is considered.

Figure 20.5: Illustration of the two-dimensional case of a linear limit state
function and standardized normally distributed variables U

The estimation of the reliability index is now essentially an optimization problem: to
find the minimum distance of the failure surface from the origin in the standard
normal space. An algorithm to arrive at the minimum distance is listed next.

1. Formulate the limit state function g(X) = 0.
2. Transform all X_i = \mu_{X_i} + u_i \sigma_{X_i}.
3. Transform g(X) → g(u).
4. Assume some initial value of each u_i. (Typically a start is made with each
   u_i = 0.)
5. Compute \partial g / \partial u_i for all variables at the current u_i.
6. Compute the values of each u_i for the next iteration using

   u_i' = \frac{\partial g}{\partial u_i} \cdot \frac{\sum_{j=1}^{n} u_j \frac{\partial g}{\partial u_j} - g}{\sum_{j=1}^{n} \left(\frac{\partial g}{\partial u_j}\right)^2}    (20.20)

7. \beta = \sqrt{\sum_{i=1}^{n} u_i'^2}
8. Repeat steps 5 to 7 until a converged value of \beta is obtained.


Features of the Hasofer-Lind Reliability Index

1. This method of determining \beta is popularly known as the First Order Reliability
   Method (FORM).
2. The method approximates the non-linear limit state function by a tangent
   hyper-plane at the minimum distance point (most probable point, or MPP).

Figure 20.6: Hasofer-Lind Reliability Index

3. It requires determination of partial derivatives of the limit state function.
   Hence it can be applied only to continuous failure functions.
4. This formulation does not suffer from the problem of invariance.
5. The method fails if there is more than one point on the failure surface at the
   minimum distance from the origin.
6. The method is easy to apply and to program.
7. The values of the random variables at the design point (MPP) indicate their
   relative effect on the failure probability: if the mean value of a variable is close
   to the MPP, P_f depends strongly on that random variable. If X_i^* denotes the
   value of a variable X_i at the design point, then the available margin on the mean
   value is X_i^*/\mu_i for a load variable and \mu_i/X_i^* for a strength variable.
8. The relative importance of the variables is calculated using \alpha_i, defined as

   \alpha_i = \frac{\partial g / \partial u_i}{\sqrt{\sum_{j=1}^{n} \left(\partial g / \partial u_j\right)^2}}    (20.21)

Example:

Consider a steel rod under pure tension loading. The rod fails if the applied stress on
the rod cross-sectional area exceeds the steel yield stress. The yield stress R of the rod
and the loading S on the rod are modeled as uncertain, uncorrelated, normally
distributed variables; the cross-sectional area A of the rod is also uncertain. The steel
yield stress R is normally distributed with mean value and standard deviation of 350
and 35 MPa respectively. The loading S is normally distributed with mean value and
standard deviation of 2 kN and 0.4 kN (2000 N and 400 N). The cross-sectional area A
is also assumed normally distributed with mean value and standard deviation of 10
and 2 mm^2.

Solution:

1. Writing the limit state function:

g = RA - S

2. Transform the variables to the standard normal space:

R = \mu_R + \sigma_R u_R
A = \mu_A + \sigma_A u_A
S = \mu_S + \sigma_S u_S

3. Transforming g into the standard normal space:

g_u = (\mu_R + \sigma_R u_R)(\mu_A + \sigma_A u_A) - (\mu_S + \sigma_S u_S)

4. Assuming initial values:

u_R = u_A = u_S = 0

5. Finding the partial derivatives:

\partial g / \partial u_R = (\mu_A + \sigma_A u_A)\, \sigma_R
\partial g / \partial u_A = (\mu_R + \sigma_R u_R)\, \sigma_A
\partial g / \partial u_S = -\sigma_S

The rest of the iterative steps are shown in the table below.

Iter  u_R     u_A     u_S    g_u      dg/du_R  dg/du_A  dg/du_S  u_R'    u_A'    u_S'   beta
1     0.000   0.000   0.000  1500     350.000  700.000  -400.0   -0.680  -1.359  0.777  1.707
2     -0.680  -1.359  0.777  64.662   254.854  652.427  -400.0   -0.562  -1.439  0.882  1.779
3     -0.562  -1.439  0.882  -0.658   249.246  660.643  -400.0   -0.546  -1.448  0.877  1.779
4     -0.546  -1.448  0.877  -0.010   248.648  661.762  -400.0   -0.544  -1.449  0.876  1.779
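The iteration above can be sketched in code; the implementation below follows steps 1-8 of the algorithm, with S expressed in newtons as in the hand calculation:

```python
# Hasofer-Lind iteration for g = R*A - S with R ~ N(350, 35) MPa,
# A ~ N(10, 2) mm^2 and S ~ N(2000, 400) N (the steel rod example).
mu = [350.0, 10.0, 2000.0]   # means of R, A, S
sig = [35.0, 2.0, 400.0]     # standard deviations of R, A, S

def g_and_grad(u):
    # Limit state and its gradient in the standard normal space, eq.(20.19)
    R = mu[0] + sig[0] * u[0]
    A = mu[1] + sig[1] * u[1]
    S = mu[2] + sig[2] * u[2]
    g = R * A - S
    grad = [A * sig[0], R * sig[1], -sig[2]]   # dg/du_i
    return g, grad

u = [0.0, 0.0, 0.0]          # step 4: start at the mean point
beta = 0.0
for _ in range(20):
    g, grad = g_and_grad(u)
    num = sum(ui * gi for ui, gi in zip(u, grad)) - g
    den = sum(gi ** 2 for gi in grad)
    u = [gi * num / den for gi in grad]        # eq.(20.20)
    beta = sum(ui ** 2 for ui in u) ** 0.5
print(round(beta, 3))  # converges to the tabulated value, ~1.779
```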

20.3.4 Non-linear Failure Functions with no Limitations on Un-correlated
Variable Distributions (Rackwitz-Fiessler Method, Level III Method)

As a further development of the iterative calculation procedure for the evaluation of
the failure probability, we need to consider cases where non-normally distributed
random variables are also present.

One of the commonly used approaches for treating this situation is to approximate the
probability distribution function and the probability density function of each
non-normally distributed random variable by a normal distribution function and a
normal density function. This methodology is referred to as the Rackwitz-Fiessler
method. As the design point is usually located in the tails of the distribution functions
of the basic random variables, the scheme is often referred to as the "normal tail
approximation".


Consider a random variable X with non-normal density function f_X(x) and
distribution function F_X(x). Denoting by x* the design point of the random variable
X in the FORM iteration, the non-normal distribution is approximated by an
equivalent normal distribution at x*; see the figure below. This is required because
the basic Hasofer-Lind reliability index is valid only in the standard normal space.
The objective now is to estimate the mean and the standard deviation of the
equivalent normal distribution. The normal distribution is fitted such that the values
of the distribution function and the density function of X at x* are equal to those of
the equivalent normal distribution.

Figure 20.7: Equivalent Normal Distribution

Equating the distribution functions and the density functions at x* gives two
equations for the two unknowns, the mean and the standard deviation of the
equivalent normal distribution, eq.(20.22).

F_X(x^*) = \Phi\!\left(\frac{x^* - \mu'}{\sigma'}\right)

and, differentiating,

f_X(x^*) = \frac{1}{\sigma'}\, \varphi\!\left(\frac{x^* - \mu'}{\sigma'}\right)    (20.22)

The equivalent mean and standard deviation are estimated from eq.(20.23):

\sigma_{eq} = \frac{\varphi\!\left(\Phi^{-1}\!\left(F_X(x^*)\right)\right)}{f_X(x^*)}

\mu_{eq} = x^* - \sigma_{eq}\, \Phi^{-1}\!\left(F_X(x^*)\right)    (20.23)

where \varphi is the standard normal density function and \Phi is the standard normal
distribution function.

This calculation needs to be performed in each iteration of the Hasofer-Lind
reliability calculation. Thus, the iterative procedure in the case of non-normal
variables is:

1. Formulate the limit state function g(X) = 0.
2. Assume initial values of all random variables. Typically X_i = \mu_i is taken in
   the first iteration.
3. Find the equivalent normal distribution parameters (\mu_{i,eq}, \sigma_{i,eq}) for all
   non-normal random variables, eq.(20.23).
4. Transform all X_i = \mu_{i,eq} + u_i \sigma_{i,eq}.
5. Transform g(X) → g(u).
6. Compute \partial g / \partial u_i for all variables at the current u_i.
7. Compute the values of each u_i for the next iteration using

   u_i' = \frac{\partial g}{\partial u_i} \cdot \frac{\sum_{j=1}^{n} u_j \frac{\partial g}{\partial u_j} - g}{\sum_{j=1}^{n} \left(\frac{\partial g}{\partial u_j}\right)^2}    (20.24)

8. \beta = \sqrt{\sum_{i=1}^{n} u_i'^2}
9. Transform all u_i back into the X space and go to step 3.

Repeat until a converged value of \beta is obtained.


Example:

Consider a failure function g = R - S. R follows a Gumbel distribution with a mean
value of 1000 and a standard deviation of 200. S follows a normal distribution with a
mean of 700 and a standard deviation of 100. Evaluate the reliability index.

The Gumbel distribution is given by

F_R(R) = \exp\{-\exp[-\alpha(R - u)]\}
f_R(R) = \alpha \exp\{-\alpha(R - u) - \exp[-\alpha(R - u)]\}

\alpha = \frac{1.282}{\sigma} = \frac{1.282}{200} = 6.41 \times 10^{-3}
u = \mu - \frac{0.577}{\alpha} = 910

1. g = R - S
2. Let R = 1000 and S = 700 (the mean values); g = R - S = 300.
3. Calculating the equivalent mean and standard deviation at this point:

F(1000) = 0.5703 (using the equations above)
f(1000) = 0.002053

\sigma_R^{eq} = \frac{\varphi\!\left(\Phi^{-1}(0.5703)\right)}{0.002053} = 191.295
\mu_R^{eq} = 1000 - 191.295\, \Phi^{-1}(0.5703) = 966.1266

4. Transforming to the standard normal space:

u_R = \frac{1000 - 966.1266}{191.295} = 0.1771
u_S = \frac{700 - 700}{100} = 0

g_u = (\mu_R^{eq} + \sigma_R^{eq} u_R) - (\mu_S + \sigma_S u_S)

5. Calculating the partial derivatives:

\partial g / \partial u_R = \sigma_R^{eq} = 191.295
\partial g / \partial u_S = -\sigma_S = -100

6. Calculating the points for the next iteration:

A = \sum_i u_i \frac{\partial g}{\partial u_i} - g = 191.295 \times 0.1771 + (-100)(0) - 300 = -266.1266
B = \sum_i \left(\frac{\partial g}{\partial u_i}\right)^2 = 191.295^2 + 100^2 = 46593.7673
A/B = -0.0057
u_R' = 191.295 \times (-0.0057) = -1.0926
u_S' = -100 \times (-0.0057) = 0.57

7. Calculating \beta:

\beta = \sqrt{u_R'^2 + u_S'^2} = 1.2329

Next iteration:

2. New values of R and S:

R = 966.1266 + 191.295 \times (-1.09265) = 757.1164
S = 700 + 0.57 \times 100 = 757
g = 0.1164

3. Calculating the equivalent mean and standard deviation at this point:

F(757.1164) = 0.0696
f(757.1164) = 0.0012
\sigma_R^{eq} = 112.4427
\mu_R^{eq} = 923.36

4. Transforming to the standard normal space:

u_R = -1.4785
u_S = 0.57

5. Partial derivatives:

\partial g / \partial u_R = \sigma_R^{eq} = 112.4427
\partial g / \partial u_S = -\sigma_S = -100

6. Points for the next iteration:

A = -223.36
B = 22643.3617
A/B = -0.0099
u_R' = -1.1092
u_S' = 0.9864

7. Calculating \beta:

\beta = 1.4843

Similarly, further iterations can be performed. The converged value of \beta, obtained
after 5 iterations, is 1.4968.
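The complete iteration for this example can be sketched as follows, using the standard library NormalDist for \Phi, \varphi and \Phi^{-1}:

```python
import math
from statistics import NormalDist

# Rackwitz-Fiessler iteration for g = R - S with R ~ Gumbel(mean 1000,
# std 200) and S ~ N(700, 100), following steps 1-9 above.
N01 = NormalDist()
alpha = 1.282 / 200.0          # Gumbel parameters from the mean and std
u_loc = 1000.0 - 0.577 / alpha  # = 910

def F_R(r):
    return math.exp(-math.exp(-alpha * (r - u_loc)))

def f_R(r):
    z = alpha * (r - u_loc)
    return alpha * math.exp(-z - math.exp(-z))

mu_S, sig_S = 700.0, 100.0
R, S = 1000.0, 700.0           # step 2: start at the mean values
beta = 0.0
for _ in range(20):
    # step 3: normal tail approximation at the current design point, eq.(20.23)
    z = N01.inv_cdf(F_R(R))
    sig_eq = N01.pdf(z) / f_R(R)
    mu_eq = R - sig_eq * z
    # step 4: transform to the standard normal space
    uR, uS = (R - mu_eq) / sig_eq, (S - mu_S) / sig_S
    g = (mu_eq + sig_eq * uR) - (mu_S + sig_S * uS)
    dR, dS = sig_eq, -sig_S    # step 6: dg/du
    num = uR * dR + uS * dS - g
    den = dR ** 2 + dS ** 2
    uR, uS = dR * num / den, dS * num / den   # step 7, eq.(20.24)
    beta = (uR ** 2 + uS ** 2) ** 0.5         # step 8
    R, S = mu_eq + sig_eq * uR, mu_S + sig_S * uS  # step 9: back to X space
print(round(beta, 4))  # converges to ~1.4968
```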

20.3.5 Non-Linear Limit State Functions and Normally Distributed
Correlated Variables (Level II Method, Hasofer-Lind Reliability
Index)

The situation where the basic random variables X are stochastically dependent is often
encountered in practical problems. For normally distributed random variables the joint
probability distribution function may be described in terms of the first two moments,
i.e. the mean value vector and the covariance matrix. This is, however, only the case
for normally or log-normally distributed random variables.

For normally distributed random variables these situations may be treated along the
same lines as described in the previous sections. Here a transformation is required
such that the variables become uncorrelated in the standard normal space. The
dependencies between the random variables X are defined by the covariance matrix
C_X of eq.(20.10).

The transformation from the X space to the u space is made using

\{X\} = \{\mu\} + [Q]\{u\}    (20.25)

where
\{X\} = vector of random variables (correlated)
\{\mu\} = vector of mean values of the random variables
\{u\} = vector of random variables in the uncorrelated standard normal space
[Q] = lower triangular matrix obtained from the Cholesky decomposition of C_X


The steps to calculate the reliability index are given below.

1. Formulate the limit state function g(X) = 0.
2. Obtain [Q] by Cholesky decomposition of C_X.
3. Transform all \{X\} = \{\mu\} + [Q]\{u\}.
4. Transform g(X) → g(u).
5. Assume some initial value of each u_i. (Typically a start is made with each
   u_i = 0.)
6. Compute \partial g / \partial u_i for all variables at the current u_i.
7. Compute the values of each u_i for the next iteration using

   u_i' = \frac{\partial g}{\partial u_i} \cdot \frac{\sum_{j=1}^{n} u_j \frac{\partial g}{\partial u_j} - g}{\sum_{j=1}^{n} \left(\frac{\partial g}{\partial u_j}\right)^2}

8. \beta = \sqrt{\sum_{i=1}^{n} u_i'^2}
9. Repeat steps 4 to 8 until a converged value of \beta is obtained.


Example:

Consider a failure function of the form g = R - S. The statistics of the stochastic
variables R and S are defined below:

R = Normal (1000, 200) and S = Normal (700, 100)

The covariance between R and S is 300.

Solution:

1. Formulate the limit state function g(X) = 0:

g = R - S

2. Obtain [Q] by Cholesky decomposition of C_X:

C_X = \begin{bmatrix} 200^2 & 300 \\ 300 & 100^2 \end{bmatrix}

Applying the Cholesky decomposition,

\begin{bmatrix} Q_{11} & 0 \\ Q_{21} & Q_{22} \end{bmatrix}
\begin{bmatrix} Q_{11} & Q_{21} \\ 0 & Q_{22} \end{bmatrix} =
\begin{bmatrix} Q_{11}^2 & Q_{11} Q_{21} \\ Q_{11} Q_{21} & Q_{21}^2 + Q_{22}^2 \end{bmatrix} =
\begin{bmatrix} 200^2 & 300 \\ 300 & 100^2 \end{bmatrix}

Solving,

[Q] = \begin{bmatrix} 200 & 0 \\ 1.5 & 100 \end{bmatrix}

3. Transform all \{X\} = \{\mu\} + [Q]\{u\}:

\begin{Bmatrix} R \\ S \end{Bmatrix} =
\begin{Bmatrix} 1000 \\ 700 \end{Bmatrix} +
\begin{bmatrix} 200 & 0 \\ 1.5 & 100 \end{bmatrix}
\begin{Bmatrix} u_R \\ u_S \end{Bmatrix}

R = 1000 + 200 u_R
S = 700 + 1.5 u_R + 100 u_S

4. Transform g(X) → g(u):

g_u = 300 + 198.5 u_R - 100 u_S

This can be solved to get the value of \beta = 1.35.
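A sketch of the solution steps, with the 2x2 Cholesky factor written out explicitly (the exact Q22 is sqrt(100^2 - 1.5^2) ≈ 99.99, which the hand calculation above rounds to 100):

```python
# Cholesky factor of the covariance matrix and beta for the correlated
# example: R ~ N(1000, 200), S ~ N(700, 100), Cov(R, S) = 300.
C = [[200.0 ** 2, 300.0], [300.0, 100.0 ** 2]]

# 2x2 Cholesky decomposition C = Q Q^T
q11 = C[0][0] ** 0.5
q21 = C[1][0] / q11
q22 = (C[1][1] - q21 ** 2) ** 0.5

# R = 1000 + q11*uR and S = 700 + q21*uR + q22*uS, so
# g = R - S = 300 + (q11 - q21)*uR - q22*uS is linear in u, and beta is
# the distance from the origin to the line g(u) = 0.
a, b, c = q11 - q21, -q22, 300.0
beta = c / (a ** 2 + b ** 2) ** 0.5
print(round(q21, 2), round(q22, 2), round(beta, 2))
```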


Most Probable Point (MPP) of Failure

The set of values of the random variables after the last iteration of any of the above
algorithms is termed the Most Probable Point of failure. At this point the value of

\prod_{i=1}^{n} f_i(x_i)

is maximum in the failure region, where f_i(x_i) represents the value of the density
function of the i-th random variable.
20.4 Integration using Monte Carlo Simulation
The most common use for Monte Carlo methods is the evaluation of integrals. This is
also the basis of Monte Carlo simulations (which are actually integrations). The basic
principles hold true in both cases.

Basics:

Standard Monte Carlo Integration

Let us consider an integral in d dimensions:

I = \int_V f(x)\, d^d x

where V is a d-dimensional hypercube with 0 \le x_i \le 1.

Monte Carlo integration:

- Generate N random vectors x_i from a flat (uniform) distribution (0 \le x_i \le 1).
- As N → ∞,

  \frac{V}{N} \sum_{i=1}^{N} f(x_i) \to I

- The error is proportional to 1/\sqrt{N}.

“Normal” numerical integration methods:

- Divide each axis into n evenly spaced intervals.
- Total number of points: N = n^d.
- Errors proportional to:
  i. 1/n (midpoint rule)
  ii. 1/n^2 (trapezoidal rule)
  iii. 1/n^4 (Simpson's method)

- If d is small, Monte Carlo integration has much larger errors than
standard methods.
In practice, MC integration becomes better when d > 6–8.

In practice, N very quickly becomes too large for the standard
methods: already at d = 10, with n = 10 (pretty small), N = 10^10. Too
much!
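The standard Monte Carlo estimate above can be sketched in a few lines. The integrand x_1 + ... + x_d, whose exact integral over the unit hypercube is d/2, is an illustrative choice, not from the text:

```python
import random

def mc_integrate(f, d, n, seed=1):
    """Standard Monte Carlo estimate of the integral of f over [0,1)^d (V = 1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(d)]  # random vector from a flat distribution
        total += f(x)
    return total / n                          # (V/N) * sum of f(x_i), with V = 1

# Exact value of the test integral is d/2 = 3; the error shrinks roughly as 1/sqrt(N)
d = 6
for n in (100, 10_000, 100_000):
    print(n, round(mc_integrate(sum, d, n), 3))
```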

Random Number Generation

Good random numbers play a central part in Monte Carlo simulations. Usually these
are generated using a deterministic algorithm, the numbers generated this way are
called pseudorandom numbers. There are several methods for obtaining random
numbers for Monte Carlo simulations.

Physical random numbers

Physical random numbers are generated from some truly random physical process
(radioactive decay, thermal noise, roulette wheel . . .). Before the computer age,
special machines were built to produce random numbers, which were often published
in books. For example, in 1955 the RAND Corporation published a book with a million
random numbers, obtained using an electric “roulette wheel”. This classic book is
now available on the net, at

http://www.rand.org/publications/classics/randomdigits/

Physical random numbers are not very useful for Monte Carlo, because:

• The sequence is not repeatable.
• The generators are often slow.
• The quality of the distribution is often not perfect. For example, a sequence of
random bits might have slightly more 0’s than 1’s. This is not acceptable for
Monte Carlo.

Pseudorandom numbers

Almost all of the Monte Carlo calculations utilize pseudorandom numbers, which are
generated using deterministic algorithms. Typically the generators produce a random
integer (with a definite number of bits), which is converted to a floating point number
x ∈ [0, 1) by multiplying by a suitable constant.
The generators are initialized once before use with a seed number, typically an integer
value or values. This sets the initial state of the generator.

The essential properties of a good random number generator are:

Repeatability – the same initial values (seeds) produce the same random
number sequence. This can be important for debugging.

Randomness – random numbers should be

• from a uniform distribution – for example, really homogeneously distributed
between [0,1]
• non-correlated, i.e. independent of each other. This is tough! No
pseudorandom sequence is truly independent.

Long period – the generators have a finite amount of internal state information,
so the sequences must repeat themselves after a finite period. The period should
preferably be much longer than the number of random numbers needed for the
calculation.
Insensitive to seeds – the period and randomness properties should not depend
on the initial seed.

Fast

Portability – same results on different computers.


Linear congruential generator

One of the simplest, oldest and most widely used generators is the linear congruential
generator (LCG). Usually the language or library “standard” generators are of this
type.

The generator is defined by integer constants a, c and m, and produces a sequence of
random integers Xi via

   X_{i+1} = (a X_i + c) mod m                                        (20.26)

This generates integers from 0 to (m-1) (or from 1 to (m-1), if c = 0). Real numbers in
[0, 1) are obtained by the division f_i = X_i / m.

Since the state of the generator is specified by the integer X_i, which is smaller than m,
the period of the generator is at most m. The constants a, c and m must be carefully
chosen to ensure this. Arbitrary parameters are sure to fail!

A typical generator:

   a = 1103515245;  c = 12345;  m = 2^31

This is essentially a 32-bit algorithm. The cycle time of this generator is
m = 2^31 ≈ 2×10^9.
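This LCG can be sketched directly from (20.26); the seed value below is arbitrary:

```python
# Linear congruential generator (20.26) with the constants quoted above
A, C, M = 1103515245, 12345, 2**31

def lcg(seed, n):
    """Return n pseudorandom floats in [0, 1) via X_{i+1} = (a*X_i + c) mod m."""
    x = seed
    out = []
    for _ in range(n):
        x = (A * x + C) % M   # integer state, 0 <= x < m
        out.append(x / M)     # real number in [0, 1)
    return out

print(lcg(seed=1, n=3))
```

Note that the same seed reproduces the same sequence, which is the repeatability property described above.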

Random numbers from non-uniform distributions

• Generate z_i uniform on [0, 1)
• Compute x_i = F^{-1}(z_i)
• Then the x_i are distributed according to f(x)
• To apply this, one must be able to compute and invert the distribution function F

Example:

Consider F(x) = 1-exp(-λx)

The random values of x can be generated from

x = -(1/λ) ln(1 - v), where v is a uniform random number between 0 and 1.
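This inverse-transform recipe is easy to sketch; λ = 2 is an arbitrary test value, and the sample mean should approach 1/λ:

```python
import math
import random

def sample_exponential(lam, n, seed=0):
    """Draw n values with F(x) = 1 - exp(-lam*x) via x = -(1/lam)*ln(1 - v)."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

lam = 2.0
xs = sample_exponential(lam, 100_000)
print(round(sum(xs) / len(xs), 2))   # sample mean, close to 1/lam = 0.5
```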


Example

Consider two probabilistic variables, X and Y. The probability distribution functions
of these are given below.


   F_X(x) = 0 for x < 0;  x^2 for 0 ≤ x ≤ 1;  1 for x > 1

   F_Y(y) = 0 for y < 0;  y^3 for 0 ≤ y ≤ 1;  1 for y > 1


The failure function is given by

   g = X - Y

Estimate the probability of failure using the Monte Carlo method. Use the set of 10
uniform random numbers each for X and Y given in the following table.
Knowing the random number r_x for X, the value of the random variable X can be
found from

   X = √r_x

Similarly, knowing the random number r_y for Y, the value of the random
variable Y can be found from

   Y = (r_y)^(1/3)

No.   r_x (for X)   X       r_y (for Y)   Y       Failure
1     0.056         0.237   0.667         0.874   1
2     0.931         0.965   0.260         0.638   0
3     0.159         0.399   0.548         0.818   1
4     0.621         0.788   0.033         0.321   0
5     0.199         0.446   0.468         0.776   1
6     0.307         0.554   0.179         0.563   1
7     0.476         0.690   0.288         0.660   0
8     0.799         0.894   0.291         0.663   0
9     0.365         0.604   0.162         0.545   0
10    0.080         0.283   0.780         0.920   1

Ten such pairs are obtained and each is substituted into the limit state
function g. Out of these, 5 lead to failure (g < 0). Hence the probability of
failure is estimated as 5/10 = 0.5.
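The worked example can be reproduced in a few lines of Python:

```python
# Uniform random numbers from the table above
rx = [0.056, 0.931, 0.159, 0.621, 0.199, 0.307, 0.476, 0.799, 0.365, 0.080]
ry = [0.667, 0.260, 0.548, 0.033, 0.468, 0.179, 0.288, 0.291, 0.162, 0.780]

fails = 0
for a, b in zip(rx, ry):
    X = a ** 0.5          # inverts F_X(x) = x^2
    Y = b ** (1.0 / 3.0)  # inverts F_Y(y) = y^3
    if X - Y < 0:         # failure when g = X - Y < 0
        fails += 1

print(fails / len(rx))    # -> 0.5
```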
Box-Muller Method for generating Normal random variable



   u_1 = √(-2 ln v_1) cos(2π v_2)
   u_2 = √(-2 ln v_1) sin(2π v_2)                                     (20.27)

where v_1, v_2 are independent uniform random numbers and u_1, u_2 are
independent standard normal random variables.
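A minimal sketch of the transform; the sample mean and variance of the generated values should be close to 0 and 1:

```python
import math
import random

def box_muller(rng):
    """One Box-Muller draw (20.27): two uniforms -> two standard normals."""
    v1, v2 = rng.random(), rng.random()
    r = math.sqrt(-2.0 * math.log(1.0 - v1))  # using 1 - v1 avoids log(0)
    return r * math.cos(2.0 * math.pi * v2), r * math.sin(2.0 * math.pi * v2)

rng = random.Random(7)
samples = [u for _ in range(50_000) for u in box_muller(rng)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))   # close to 0 and 1
```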


Error Estimate in Monte Carlo Method

The error estimate in the Monte Carlo simulation is estimated using the coefficient of
variation of the result. It is given by


   cov(P_f) = 1/√(N P_f)                                              (20.28)

where P_f is the probability of failure estimated from N simulations.
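Equation (20.28) shows why small failure probabilities are expensive to estimate: resolving P_f = 10^-3 to a 3% coefficient of variation already takes about a million simulations.

```python
def cov_pf(pf, n):
    """Coefficient of variation (20.28) of a Monte Carlo estimate of Pf."""
    return 1.0 / (n * pf) ** 0.5

print(round(cov_pf(1e-3, 10**6), 3))   # -> 0.032
```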

Monte Carlo applied to structural mechanics

Consider the following example:


The failure equation can be written as

   g(a, σ, K_c) = K_c - 1.12 σ √(π a)

a, σ and K_c have inherent scatter. Their distribution functions are represented by
F_a, F_σ and F_K respectively.
Procedure:

Generate 3 random numbers in [0, 1), one for each random variable.

Transform them into values of a, σ and K_c by inverting the respective
distribution functions F_a, F_σ and F_K.

Evaluate g; if g < 0, then Fail = Fail + 1.

Repeat the steps, say, 50 times.

Probability of failure is given by

Pf = Fail / 50
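The procedure can be sketched as follows. The normal distributions and their parameter values are illustrative assumptions, since the text leaves F_a, F_σ and F_K unspecified; a larger sample size than 50 is used to stabilize the estimate.

```python
import math
import random
import statistics

# Sketch of the Monte Carlo procedure for g(a, sigma, Kc) = Kc - 1.12*sigma*sqrt(pi*a).
# Distributions below are assumed for illustration only.
F_a = statistics.NormalDist(0.02, 0.003)   # crack depth a (m)
F_s = statistics.NormalDist(200.0, 30.0)   # stress sigma (MPa)
F_K = statistics.NormalDist(80.0, 10.0)    # toughness Kc (MPa*sqrt(m))

rng = random.Random(3)
N, fail = 50_000, 0
for _ in range(N):
    # one uniform random number per variable, inverted through its CDF
    a = F_a.inv_cdf(rng.random())
    s = F_s.inv_cdf(rng.random())
    Kc = F_K.inv_cdf(rng.random())
    if Kc - 1.12 * s * math.sqrt(math.pi * a) < 0:   # failure
        fail += 1

print(fail / N)   # estimated probability of failure
```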

20.5 Application of Reliability methods in structural analysis

20.5.1 Formulating an Optimized In-Service Inspection Plan

Probabilistic methodology is being increasingly used in different industries all over the
world to prepare an optimized in-service inspection (ISI) plan. Since inspections
involve a shutdown of the plant in most cases, it is imperative to inspect the minimum
necessary without compromising safety. The need is to address the following
questions:

• What to inspect: Which components in the system are to be inspected.
• How much to inspect: Once the component has been selected, extent of the
inspection needs to be prescribed.
• When to inspect: An optimized inspection schedule needs to be generated.
• What to repair: Do defects detected during inspections need to be repaired?

Traditionally these questions have been addressed mostly on the basis of experience
of the operators and in some cases deterministic methods.

The stress analysis of piping components gives the locations which see the maximum
stresses. The welds are in general expected to be the initiation sites for defects. Thus
combinations of these and the fatigue usage factors of the different components
typically help in selecting the vulnerable sites. There is, however, considerable scatter
in the parameters on which such decisions are based. The material properties,
especially the fatigue and toughness data, have considerable uncertainty associated
with them. The fatigue loads coming onto the plant cannot be predicted with certainty
either, nor can the fatigue crack growth in service. The NDE methods used for ISI do
not detect defects with high confidence.

Traditional methods allow for these uncertainties by incorporating a high factor of
safety at each step of the analysis, leading to very conservative and, on certain
occasions, incorrect results. Probabilistic methods, on the other hand, quantify the
uncertainties by using the
density functions for the uncertain parameters. These however rely strongly on the
amount of data available for fitting the distributions.

Probabilistic methods help in ranking the relative importance of different locations in
the piping system incorporating the uncertainty associated with the decision
parameters. A typical probabilistic methodology can be given as:

1. Select locations of interest on the piping system using the traditional method
of high stress/inferior material properties and sites for possible degradation
with service life.

2. Formulate the time variant damage models for each such site (fatigue, creep,
corrosion, erosion, irradiation etc).


3. Represent the parameters involved in the damage models using suitable
probability density functions.

4. A crack is assumed at each of these locations. The size of the crack can be
estimated from the crack initiation models.

5. There will be a probability for detecting these initiated cracks during the pre-
service inspection (PSI). This probability is generally given as probability of
detection (POD) curves, which are well documented in literature. A typical
POD curve is shown below.


Figure 20.8: Typical POD Curve

6. If a proof test of the piping system is performed, the cracks which would fail during
this test can be censored, as they will not enter service. The failure can
be determined using fracture mechanics methods such as J-Tearing, R6,
SINTAP etc. depending on the failure mechanism expected.

7. The damaging mechanisms are now applied. In cases like fatigue, the
arrival times of the cyclic loads (transients) also need to be modeled as
stochastic variables. With time, because of the action of the damaging
mechanisms, the condition of the structure deteriorates. It can be increasing
flaw depth (fatigue, corrosion), wall thinning (erosion) or degradation of
material properties (irradiation) etc. These need to be calculated with the
passage of time and by using the appropriate damage models. The failure of
the component needs to be evaluated with respect to time using appropriate
failure equations. A curve of probability of failure (P_f) versus time for each
site of interest can be obtained. This value includes the effect of successive
inspections.

A typical P_f versus time curve in the absence of any inspection is shown below.

Figure 20.9: P_f versus time (without inspections)

A target value P_f-Target can be decided. The inspections can be scheduled such that
P_f always remains below P_f-Target. This is shown in the figure below. T_1, T_2 and
T_3 in this case can be taken as the inspection intervals.

Figure 20.10: Optimized ISI intervals
This analysis is to be performed for all locations identified earlier. From this
group, a selection of the more vulnerable ones can be made based on the P_f curves.

20.5.2 Designing based on Load and Resistance Factor Design
(LRFD)


Consider a Design equation

R = L1 + L2 + L3

Let the required factor of safety be 1.5

Thus, for a safe design:

   R/(L1 + L2 + L3) ≥ 1.5

Let all the variables R, L1, L2, L3 follow a normal distribution.

The coefficient of variation (σ/u) of each variable is given as

Variable   cov
R          0.1
L1         0.1
L2         0.2
L3         0.3

The reliability index for a linear function with all variables normally
distributed is given by
   β = [u_R - (u_L1 + u_L2 + u_L3)] / √(σ_R^2 + σ_L1^2 + σ_L2^2 + σ_L3^2)

Now we will try to estimate probability of failure for different load
combinations.

Let u_R = 300. Thus, to satisfy the design equation, we should have Σu_L = 200.

u_L1   u_L2   u_L3   Σu_L   P_f
200    0      0      200    2.8×10^-3
0      200    0      200    2.3×10^-2
0      0      200    200    6.8×10^-2
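The table can be reproduced from the β formula above, since for each case P_f = Φ(-β), where Φ is the standard normal distribution function:

```python
import statistics

# Reliability index and Pf for the linear limit state R - (L1 + L2 + L3),
# all variables normal, for the three load cases considered above.
cov = {"R": 0.1, "L1": 0.1, "L2": 0.2, "L3": 0.3}
uR = 300.0
sR = cov["R"] * uR   # standard deviation of R

for uL in ([200.0, 0.0, 0.0], [0.0, 200.0, 0.0], [0.0, 0.0, 200.0]):
    sigmas = [cov[k] * u for k, u in zip(("L1", "L2", "L3"), uL)]
    beta = (uR - sum(uL)) / (sR**2 + sum(s**2 for s in sigmas)) ** 0.5
    pf = statistics.NormalDist().cdf(-beta)   # Pf = Phi(-beta)
    print(uL, round(beta, 2), f"{pf:.1e}")
```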


This shows that, although the design equation is satisfied in each of the cases, a
different value of the probability of failure is obtained in each. This kind of
anomaly is addressed by probabilistic methods, where instead of keeping a
single factor of safety we can design for a target P_f value.

Alternatively, margins on each of the parameters can be derived using probabilistic
methods, such that the target P_f value is always maintained. The higher the cov of
a particular variable, the larger the safety margin required for that variable.

   γ_R R = γ_1 L1 + γ_2 L2 + γ_3 L3

This methodology has been used in civil engineering design codes for many years.
Various mechanical design codes are also incorporating this philosophy now.

Example

Table 20.2: Partial Safety Factors of API 579 (2000) for the assessment of crack-like
flaws.


The table above has been taken from the API 579 [20.4] code for the safety evaluation
of cracked components. These partial safety factors were generated using R6 as the
failure criterion [20.5].


20.6 References

[20.1] Melchers, R.E.: Structural Reliability: Analysis and Prediction. John Wiley &
Sons, New York, 1987.

[20.2] COMREL: Software for Evaluating Component Reliability, Free Demo
Download Available at, www.strurel.de.

[20.3] NESSUS: Probabilistic Analysis software, www.nessus.swri.org

[20.4] API 579: Recommended Practice for Fitness-For-Service, 2000.

[20.5] Bloom, J. M., “Partial Safety Factors (PSF) and Their Impact on ASME Section
XI, IWB-3610”, presented at the 2000 ASME Pressure Vessels and Piping Conference,
July 23-27, 2000, Seattle, Washington.
