Abstract
Performance-based design involves the calculation of design parameters to meet one or more performance
criteria, with corresponding specified reliabilities over the service life. This work presents an approach to
reliability calculations in performance-based design, using an importance sampling simulation, with perfor-
mance functions evaluated by localized interpolation of a response database. Four examples are shown,
including two applications to performance-based design in earthquake engineering: design parameters are
obtained to optimally match target reliabilities for three limit states associated with different deformation
levels. © 2002 Elsevier Science Ltd. All rights reserved.
Keywords: Reliability; Performance; Performance-based design; Simulations
1. Introduction
Performance-based design is defined here as the consideration of different performance criteria and
the calculation of design parameters to meet those criteria, with corresponding target reliabilities over
the service life. Thus defined, performance-based design allows a solution customized to the situation
at hand, taking full advantage of the available tools in structural analysis and reliability theory.
Traditional design methods implemented in codes, on the other hand, use ‘‘factored design
equations’’ to either calculate the design parameters or to check the adequacy of a design for given
performance requirements. These design equations introduce factors for nominal demands and
resistances, and target reliabilities are achieved approximately. The factors are optimally obtained
from a minimization of the differences between the target levels and the achieved reliabilities over a
sufficiently large, representative number of ‘‘calibration design points’’. As a consequence, the
actual reliabilities achieved by the code recommendations may vary from situation to situation
206 R.O. Foschi et al. / Structural Safety 24 (2002) 205–218
and may deviate substantially from the targets for cases other than the calibration points. Thus,
although the two design approaches have identical aims, they differ in their implementation and
results. While a codified approach will probably always be required to maintain minimum stan-
dards, it is argued here that the more general implementation of performance-based design allows
a more transparent and flexible approach, since the analysis and the reliability estimation could
be customized to the situation at hand. It can also be argued that this more general approach
promotes innovation, and that it is increasingly facilitated by the continuing improvements in
computing power available to engineers.
In some cases, the traditional design procedure may not even be the result of a reliability-cali-
brated ‘‘factored design equation’’. In earthquake engineering, for example, the usual deterministic
approach depends first on the calculation of demands from an elastic response, and then on mod-
ifying those demands for assumed ductility levels. The modified demands are then used to design
the members. The implementation of performance-based approaches in seismic design would be
based on the specification of criteria in a more explicit manner, using a nonlinear dynamic analysis
for the calculation of the demand, and reliability analysis to evaluate the achieved exceedance probability in each limit state. Should these probabilities be greater than the targets, the design parameters
would be changed accordingly. The process could also be formulated as an optimization problem, in
which design parameters would be found directly by minimization of the differences between
achieved reliabilities and the targets specified for each criterion.
The problem of efficiency in evaluating the achieved reliability, for a combination of design
parameters, is then central to performance-based design. Let us assume that a given limit state is
represented by a performance or limit state function G(X), where X is the vector of intervening
random variables. The evaluation of G could be very time-consuming, possibly involving the
calculation of nonlinear structural responses. This work presents an approach to reliability esti-
mation that is, in fact, a combination of a response surface methodology, FORM and Impor-
tance Sampling simulation. The approach is simplified by using a localized representation of the
limit state function, using discrete data for G corresponding to a set of vectors X, obtained from
previous deterministic calculations. Thus, the response data gathering is performed independently
of the reliability calculation and, while the former may be computationally intensive, the second
is not, allowing for a fast evaluation of reliability and, therefore, for efficient implementation of
performance-based design.
In our approach, the probability of failure is estimated by Importance Sampling simulation
around a ‘‘design point’’ obtained through an approximate analysis using a simple response sur-
face and FORM. In this sense, the approach is an extension of work and software development
by Schüeller and Stix [1]. When required, the evaluation of the function G is done here through a
local interpolation procedure using the available deterministic, discrete response data. These can
be augmented by adding more combinations to the set of deterministic results, an approach similar
to that presented by Murotsu et al. [2] in the context of a ‘‘neural network’’ as the basis for the
reliability estimation. General applications of neural networks are discussed, for example, by Patterson [3] and Haykin [4]. The method proposed here for the interpolation of the deterministic
database can be regarded as the implementation of a simple or basic neural network, trained
locally. The general implementation of such networks, with several hidden layers and neurons,
would be methodologically similar and would just provide a more sophisticated algorithm for the
interpolation of the data.
Fi = F(Xi), (i = 1, 2, ..., m)  (1)

F = F0 + a^T (X − X0) + (X − X0)^T b (X − X0)  (2)
in which F0 is the system response available at X0, the closest vector to the input X. a is the vector
of the coefficients for the linear terms and b the corresponding matrix for the quadratic terms.
For simplicity, this matrix could be taken as diagonal, therefore,
a = (a1, a2, ..., an)^T  (3)

b = diag(b1, b2, ..., bn)  (4)
A total of 2n unknowns must then be determined. For this we require the datum (X0, F0) plus at least m = 2n additional data. The localized interpolation from Eq. (2) ensures that the function at the closest
point is calculated exactly. The vector a and the coefficients in the matrix b are obtained by
optimization, minimizing the sum of the square of the errors or differences between the actual
data and the prediction from the localized surface. Note that, in a sense, the accuracy in the
interpolation is maximized since the interpolating surface is anchored at the closest data point.
This accuracy is improved if more data are available, which is easily achieved by adding more
deterministic results. The local interpolation discussed here is also a strategy similar to that
introduced in several works on machine learning [5,6]. In the context of a neural network, the
proposed procedure is similar to a local ‘‘training’’.
The following steps accomplish the local interpolation of the response at a point X:
1. Ranking the response data Xi (i=1, 2,. . .m) according to their distance to X, in a sequence
from the closest to the farthest.
2. Selecting the first 2n + 1 data, together with their corresponding functions Fi, and assigning the closest as the anchor datum X0.
3. Calculating the coefficients in a and b, as per Eqs. (3) and (4).
4. Calculating the predicted response F, as per Eq. (2).
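The four steps above can be sketched in a few lines of Python. The function name and the NumPy least-squares fit are our own illustration of the procedure, not the authors' code; a diagonal b matrix is assumed, as in Eq. (4).

```python
import numpy as np

def local_interpolate(x, X_data, F_data):
    """Predict the response at x with a quadratic surface anchored at the
    closest database point (diagonal b, 2n unknowns), fitted by least
    squares to the 2n + 1 nearest data. Illustrative sketch only."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # 1. rank the database points by distance to x
    order = np.argsort(np.linalg.norm(X_data - x, axis=1))
    # 2. select the 2n + 1 closest; the nearest is the anchor (X0, F0)
    sel = order[: 2 * n + 1]
    X0, F0 = X_data[sel[0]], F_data[sel[0]]
    Xs, Fs = X_data[sel[1:]], F_data[sel[1:]]
    # 3. least-squares fit of a (linear) and diagonal b (quadratic):
    #    F - F0 = a^T (X - X0) + sum_i b_i (X_i - X0_i)^2
    dX = Xs - X0
    A = np.hstack([dX, dX ** 2])            # (2n, 2n) design matrix
    coef, *_ = np.linalg.lstsq(A, Fs - F0, rcond=None)
    a, b = coef[:n], coef[n:]
    # 4. evaluate the anchored quadratic at x
    dx = x - X0
    return float(F0 + a @ dx + b @ dx ** 2)
```

Because the surface passes exactly through (X0, F0), any response that itself belongs to this quadratic family is reproduced exactly once the fit has enough well-placed neighbors.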
The accuracy of the calculated response will depend on m, the number of vectors Xi in the
database and on whether, during the simulation, the calculated function is the result of an extra-
polation. The latter can be avoided by ensuring that the discrete data for F cover a sufficiently wide
domain of combinations X, taking into account the ranges likely to be encountered in a specific
design situation. Special characteristics of the problem could also be taken into account: for
example, in earthquake engineering, the most important random variable is normally the peak
acceleration associated with the ground motion (PGA). The number of samples for this variable
should then be greater than for the others. These, in turn, could be sampled at their mean values ± kσ, where k is a number of standard deviations (e.g., k = 3.0). Furthermore, in order to introduce additional consistency into the interpolation, the deterministic database should be augmented with zero responses at PGA = 0.0, for all values of the remaining variables.
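A sketch of how such a grid of input combinations might be assembled follows; the function and its arguments are hypothetical, since the paper only describes the sampling strategy. The responses for the appended PGA = 0 rows would be set to zero separately.

```python
import numpy as np

def sampling_grid(pga_levels, means, stds, k=3.0):
    """Input combinations for a response database: PGA on its own (finer)
    grid, each remaining variable at mean - k*sigma, mean, mean + k*sigma,
    plus PGA = 0 rows to anchor the zero response. Illustrative sketch."""
    levels = [list(pga_levels)] + [[m - k * s, m, m + k * s]
                                   for m, s in zip(means, stds)]
    # full factorial grid; first column is PGA
    grid = np.array(np.meshgrid(*levels, indexing="ij")).reshape(len(levels), -1).T
    # consistency augmentation: duplicate one PGA slice with PGA set to 0
    zero_pga = grid[np.isclose(grid[:, 0], pga_levels[0])].copy()
    zero_pga[:, 0] = 0.0
    return np.vstack([grid, np.unique(zero_pga, axis=0)])
```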
Using the local interpolation, the proposed procedure for reliability evaluation is as follows.
1. The database is scanned to determine the combination of the variables Xi (i = 1, 2, ..., n) for which the performance function G(X) is closest to zero. This will determine a point P. If no such point P is found, then the database must be expanded, because for the limit state studied it produces either all positive or all negative values of G(X), a case which would lead to extrapolations.
2. Point P is then chosen as the anchor for an approximate, quadratic response surface over
the entire domain [7]:
F = c0 + Σ_{i=1}^{n} ci Xi + Σ_{i=1}^{n} di Xi²  (5)
The coefficients in Eq. (5) are calculated by developing 2n + 1 values of F, using the database and the local interpolation procedure. The values are chosen at deviations of +kσ and −kσ from P, for each of the variables, without stepping out of the database domain.
3. The approximate response surface from Eq. (5) is used with FORM to find a design point
Q, to serve as the anchor for an estimation of the failure probability using Importance
Sampling simulation. Having obtained the failure probability, the result is finally converted to an associated reliability index β.
Thus, FORM is only used with the quadratic surface of Eq. (5) to obtain an approximate
design point Q around which the simulation is carried out. The simulation requires only evalua-
tions of the performance function, not gradients, thus avoiding convergence problems in FORM
potentially created by peaks and valleys in the real G(X). The simulation is efficiently carried out
by using the deterministic database and the procedure for local interpolation of the discrete data.
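The importance sampling step can be sketched as follows, assuming the variables have been transformed to independent standard normals; the helper names are ours, and the linear G used in the test below is only a benchmark with a known exact answer, Pf = Φ(−3).

```python
import numpy as np
from math import erf, sqrt

def std_normal_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def importance_sampling_pf(G, u_design, n_samples=2000, seed=0):
    """Estimate Pf = P[G(U) <= 0], with U a vector of independent standard
    normals, by sampling a unit-covariance normal centered at the design
    point u_design and reweighting each sample by phi(u)/h(u).
    Illustrative sketch, not the authors' implementation."""
    rng = np.random.default_rng(seed)
    u_design = np.asarray(u_design, dtype=float)
    u = rng.standard_normal((n_samples, u_design.size)) + u_design
    # for unit-covariance normals the weight reduces to
    # exp(-|u|^2 / 2 + |u - u*|^2 / 2)
    log_w = 0.5 * (np.sum((u - u_design) ** 2, axis=1) - np.sum(u ** 2, axis=1))
    fail = np.array([1.0 if G(ui) <= 0.0 else 0.0 for ui in u])
    return float(np.mean(fail * np.exp(log_w)))
```

Only evaluations of G are needed, never gradients, and the estimate remains efficient because the sampling density is concentrated where failures occur. The reliability index then follows as β = −Φ⁻¹(Pf).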
The forward reliability calculation method just described can also be used to implement the
solution of the inverse problem: given the performance criteria and associated target reliability
levels βT, determine a set of design parameters to optimally satisfy the criteria with the desired reliabilities. A discussion of the inverse problem has been presented by Li and Foschi [8]. Let {d} be the vector of design parameters. These can then be obtained by optimization, minimizing the objective function Φ,
Φ = Σ_{j=1}^{ND} (βTj − βj(d))²  (6)
in which ND is the number of performance criteria. The design parameters should be sought within
appropriate bounds. Any constrained optimization algorithm can be used for this purpose. To avoid
the violation of the bounds during the optimization, it is simpler and more robust to use a gradient-
free algorithm. In the examples shown in this work, a genetic algorithm [9] has been used.
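The objective of Eq. (6) can be illustrated with a single design parameter and a closed-form reliability index; here a plain bounded random search stands in for the genetic algorithm of [9], and all distributions and numbers are illustrative only.

```python
import numpy as np

def beta_linear(d, s_mean=10.0, cov_r=0.1, s_std=2.0):
    # closed-form reliability index for G = R - S,
    # with R ~ N(d, 0.1 d) and S ~ N(10, 2)  (illustrative problem)
    return (d - s_mean) / np.sqrt((cov_r * d) ** 2 + s_std ** 2)

def inverse_design(beta_target, bounds, n_iter=5000, seed=1):
    """Gradient-free random search minimizing (beta_T - beta(d))^2 within
    the given bounds; a stand-in for the constrained genetic algorithm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    best_d, best_obj = lo, np.inf
    for _ in range(n_iter):
        d = rng.uniform(lo, hi)
        obj = (beta_target - beta_linear(d)) ** 2
        if obj < best_obj:
            best_d, best_obj = d, obj
    return best_d
```

Because candidate designs are drawn only inside the bounds, no penalty terms or gradient projections are needed, which is the robustness argument made above.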
It is possible that the optimization algorithm just described may not be very efficient, since it
requires repeated forward calculations of the reliability for each selection of the design vector
{d}. An alternative approach would be the a priori calculation of the reliability index β for different choices
of {d}, developing a reliability database in a manner similar to the original response databases.
The reliability data can then be efficiently accessed by the optimization procedure, also using the
interpolation algorithm discussed here, or a more general neural network approach.
The inverse approach, once the response (and reliability) databases have been obtained, permits the development of very fast performance-based design software. In the end,
this is of fundamental importance for the acceptance of performance-based design in solving
everyday practical problems. Four application examples will now be discussed, illustrating the
forward problem of calculating reliability for a given design and the inverse problem of calculating
design parameters given specific reliability targets.
3. Application examples
To introduce the approach, consider first a simple mathematical problem with three random
variables and two limit state functions, G1 and G2,
in which X(1) is normal (3.0, 0.3), X(2) is also normal (2.0, 0.2), and X(3) is lognormal (10.0, 2.0).
Table 1 shows the reliability indices obtained using these functions explicitly, either with
FORM or with a Monte Carlo simulation (10^6 samples). In order to use the interpolated response surface approach discussed in this work, two databases were constructed: one for F1 = X(1) X(2) and the other for F2 = X(1)² X(2), with a total of 28 combinations of X(1) and X(2) in the interval 0.0–5.0. X(3) was treated independently, without including it in the databases. Table 1 also
Table 1
Results, example 1, forward problem
shows the reliability indices obtained with either a linear (b=0) or a quadratic local interpolation.
The results are quite comparable, even with a small sample size for the databases. This example is
only shown here as a benchmark since, of course, there is little advantage in constructing independent
databases when the limit state functions are readily and explicitly available.
It is also possible to formulate here an inverse problem representative of one in performance-based design. Assuming that X(1) and X(2) are normally distributed with a coefficient of variation of 0.20, what should their corresponding means (the ''design parameters'') be so that a target reliability index of 1.645 is achieved for each of the limit states? Table 2 shows the results of the optimization, with required means of 2.623 and 2.403, respectively.
A second example considers serviceability limit states for a tall building as shown in Fig. 1. The
structure receives a triangular, distributed load from the left, with maximum intensity q, and
vertical loads P at the roof level. Both load intensities are random with, respectively, Extreme
Type I and normal distributions. Other random variables are the modulus of elasticity E of the
columns, and their cross-sectional width B and depth H, allowing for construction variability.
Over the height of the building (80 m in 20 stories), the columns have three different cross-sections, with dimensions Bi, Hi (i = 1, 2, 3): the column size remains constant over the first 7 floors, then from the 8th through the 14th, and finally from the 15th to the top. Thus, a total of 9 random variables are considered, and Table 3 shows their statistics. To control serviceability and
minor damage, the design criteria involve the overall building drift at the roof level, plus the inter-
story drifts at the 15th and at the 5th floor levels. The deformation of the building must take into
account P-delta effects. The limit states were expressed as
Table 2
Results, example 1, inverse problem
Table 3
Example 2, variable statistics
in which Δ1 is the building drift at the roof level, Δ2 the inter-story drift at the 15th floor, and Δ3 the inter-story drift at the 5th floor. Δ2 and Δ3 are expressed as fractions of the story height.
Databases for the relevant response information were constructed using the program CANNY-E,
considering 200 combinations of the 9 random variables. LIM1, LIM2 and LIM3 are the corre-
sponding deformation limits, respectively, 0.05 m, 0.067% (1/1500), and 0.080% (1/1250). Table 4
Table 4
Results, example 2, forward problem
shows the calculated reliability indices. In a performance-based design problem, for the same deformation limits, the design parameters were chosen to be the means of the three column depths, H1, H2 and H3, allowing a coefficient of variation (COV) of 0.01. The target reliability levels for each limit state were, respectively, β1 = 2.5, β2 = 2.0 and β3 = 2.0. The results are shown in Table 5.
For the third example we consider an earthquake engineering problem. The inverted pendulum shown in Fig. 2 is a wood column carrying an overhanging mass M and connected at its base by four dowels and two steel side plates fixed to the ground, which moves with an acceleration aG(t). The drift, or horizontal displacement at the top of the column, is Δ, and Δmax is its maximum during the earthquake. The column is assumed to be sufficiently rigid that most of the displacement is due to the deformation of the connectors. A nonlinear dynamic analysis can be used to calculate the time history Δ(t), taking into account the hysteretic behavior of the dowels, with a typical result shown in Fig. 3 for the Landers, California, earthquake of 1992 (Joshua Tree Station).
The nonlinear analysis incorporates, at each time step, a calculation of the hysteresis loop for
each of the dowels, taking into account their own elasto-plastic response and the nonlinear
behavior of the medium around them [10]. The performance criteria are associated with different damage levels, expressed in terms of the maximum horizontal drift Δmax and limits H/K, with K being a constant defining the drift limit and H the column height. For each limit state, the performance function G(X) can then be written as

G(X) = H/K − Δmax(X)

Δmax is a function of X, the vector of random variables associated with the problem. Some of
these are linked to the structural properties and others to the uncertainty in the earthquake itself.
Table 5
Results, example 2, inverse problem
Limit state | Deformation limit | Target reliability index | Achieved reliability index | Design parameters
Although the formulation of the problem is simple, the evaluation of the G function for each
vector X is time-consuming and generally makes standard simulation procedures quite inefficient.
On the other hand, the shape of G may show peaks and valleys in coincidence with frequencies
near resonant conditions, making the gradient-dependent FORM algorithms susceptible to slow
or difficult convergence. Even if convergence is achieved, the nonlinearity of G may make the
usual approximation of failure probability, based on the reliability index, quite inaccurate. A
response surface replacement for the actual G function may be used, but it is difficult to find a
simple surface which will represent with confidence all the nonlinearities and oscillations in G,
particularly over the entire domain of interest. On the other hand, the approach presented in this
work can be applied with good results.
Five random variables were selected and their statistics are given in Table 6. One of the variables was the peak ground acceleration (PGA) during the event; its cumulative distribution was consistent with a design value of 0.25 g (475-year return period), a COV = 0.6 and a mean earthquake arrival rate of 0.1 events/year. The peak acceleration was expressed as a random factor f times the peak of the historical record. The mass M had a mean value of 800 kg and a COV = 0.10. The dowel diameter D and the arrangement radius R were nominally deterministic, but considered random with a very small coefficient of variation. The yield point for the dowels, Y, was assumed to have a COV = 0.10. A database for Δmax was generated by sampling the five variables, augmenting it with zero responses at zero peak ground acceleration for all values of the remaining four variables.
Different evaluation procedures were considered: (1) standard Monte Carlo simulation with a sample size of 100,000; (2) FORM using the simple quadratic response surface anchored at point P; (3) the approach discussed in this work, based on importance sampling simulation with a sample size of 1000, anchored at the FORM design point Q. The results are shown in Table 7 in terms of reliability indices β, for each performance level and type of analysis. Table 7 also shows the design point Q. All simulations used the localized interpolation for the evaluation of G(X).

Table 7 shows that the scheme based on FORM + importance sampling around Q is in close agreement with the benchmark Monte Carlo results. FORM alone, used on the quadratic response surface from Eq. (5), does not generally produce acceptable answers, in particular for the last three performance levels with higher exceedance probabilities.
Table 6
Example 3, variable statistics
Table 7
Results, example 3, forward problem
For an inverse application, the vector {d} of design parameters was defined as
where the components of {d} are the mean values of the variables R, D and Y.
Three performance criteria were considered
G3 = Fmax − F  (16)
The fourth example considers a wood frame shear wall as typically built for residential construction in North America (Fig. 4). It consists of a frame with panel covers attached by means of nails or screws to the frame
members. Under earthquake excitation, the response of the structure will be mostly controlled by
the following parameters: (1) the nail spacing e1 around the perimeter of the cover panels; (2) the
nail spacing e2 in the interior of the panels; (3) the hysteretic behavior of the nails used; (4) the mass
M carried by the wall; and (5) the characteristics of the earthquake: the peak ground acceleration
and the frequency content of the accelerogram. The performance requirement may be again spe-
cified in terms of damage, expressed as maximum wall drift or horizontal displacement of the
mass M. Performance functions can then be written, as in the previous example, in the form

G(X) = H/K − Δmax(X)

in which H is the wall height and K a constant defining the tolerable drift (e.g., H/200).
Table 8
Results, example 3, inverse problem
Dowel diameter D (mm) | Dowel yield stress Y (MPa) | Arrangement radius R (mm)
The variables affecting the actual response are considered random. Certainly, the variables
associated with the earthquake are very uncertain. In performance-based design, some of the
characteristics of these variables can be used as design parameters. For example, for a given mass
and earthquake statistics and allowing for construction inaccuracies, the mean of the nail spacing
e1 and e2 can be calculated so that specific performance criteria are met with pre-set target
reliability levels.
A nonlinear dynamic analysis of the wall was performed for different combinations of the intervening variables, in order to create a database for the response Δ. Such a dynamic analysis,
incorporating the calculation of the hysteresis loop for each nail at each time step, has been
implemented at the University of British Columbia in the software package DAP3D, capable of
performing the analysis of arbitrary three-dimensional wood framed structures. This example
shows the results for a standard shear wall, 2.44 m tall, with 12 mm Oriented Strand Board cover
panels and a frame with vertical members with a spacing of 400 mm. The covers are attached with
common 50 mm long nails. For this example, the earthquake record is also that of the Landers,
Joshua Tree Station, CA event of 1992. The earthquake data used in this design example are
consistent with a site design acceleration of 0.25g (for a 475 year return period), a mean return
rate of earthquakes of 0.1 (on average, one every ten years), resulting in a mean ground accel-
eration of 0.927 m/s2 (around 0.1 g) with a standard deviation of 0.556 m/s2. The applied mass M
is, on average, 6.0 kN.s2/m, with a standard deviation of 0.6 kN.s2/m.
The performance criteria are specified as follows: (1) ''no damage'', or a maximum wall drift < H/300, with an exceedance probability of 5% (reliability index β = 1.645); and (2) ''tolerable damage'', or a maximum wall drift < H/200, with an exceedance probability of 1% (reliability index β = 2.330). The inverse procedure presented in this work was applied with the local interpolation of the database, with resulting optimum mean nail spacings of 29.5 mm along the perimeters of the covers and 185.4 mm for the interior nailing, allowing a 10% coefficient of variation.
4. Conclusions
References

[1] Schüeller GI, Stix R. A critical appraisal of methods to determine failure probabilities. Structural Safety 1987;4:293–309.
[2] Murotsu Y, Shao S, Chiku N, Fujita K, Shinohara Y. Studies on assessment of structural reliability by response surface method and neural network. In: Thoft-Christensen P, Ishikawa H, editors. Reliability and optimization of structural systems. Elsevier; 1993. p. 173–80.
[3] Patterson DW. Artificial neural networks: theory and application. Prentice Hall; 1996.
[4] Haykin S. Neural networks: a comprehensive foundation. Prentice Hall; 1999.
[5] Atkeson CG, Schaal SA, Moore AW. Locally weighted learning. AI Review 1997;11:11–73.
[6] Moore AW, Schneider J, Deng K. Efficient locally weighted polynomial regression predictions. In: Proc. 1997 International Machine Learning Conference. Morgan Kaufmann; 1997.
[7] Bucher CG, Bourgund U. A fast and efficient response surface approach for structural reliability problems. Structural Safety 1990;7:57–66.
[8] Li H, Foschi RO. An inverse reliability method and its applications. Structural Safety 1998;20:257–70.
[9] Goldberg DE. Genetic algorithms in search, optimization and machine learning. Addison-Wesley; 1989.
[10] Foschi RO. Modeling the hysteretic response of mechanical connections for wood structures. In: Proceedings, World Conference on Timber Engineering, Whistler, BC, Canada; 2000.