
UNCERTAINTY QUANTIFICATION AND

OPTIMIZATION OF STRUCTURAL RESPONSE
USING EVIDENCE THEORY

A dissertation submitted in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy

By
HA-ROK BAE
B.S., Ajou University, South Korea, 1999
M.S., Ajou University, South Korea, 2001

_____________________________________

2004
Wright State University

WRIGHT STATE UNIVERSITY
SCHOOL OF GRADUATE STUDIES
November 20, 2004
I HEREBY RECOMMEND THAT THE DISSERTATION PREPARED
UNDER MY SUPERVISION BY Ha-Rok Bae ENTITLED Uncertainty Quantification
and Optimization of Structural Response Using Evidence Theory BE ACCEPTED IN
PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF
Doctor of Philosophy
______________________________
Ramana V. Grandhi, Ph.D.
Dissertation Director
Director, Engineering Ph.D. Program
______________________________
Robert A. Canfield, Ph.D.
Co-Director

Committee on Final Examination
______________________________
Ramana V. Grandhi, Ph.D., WSU
______________________________
Richard J. Bethke, Ph.D., WSU
______________________________
Joseph C. Slater, Ph.D., PE, WSU
______________________________
Robert A. Canfield, Ph.D., AFIT
______________________________
Gary Kinzel, Ph.D., OSU

______________________________
Joseph F. Thomas, Jr., Ph.D.
Dean, School of Graduate Studies

ABSTRACT
Bae, Ha-Rok. Ph.D., Department of Mechanical and Materials Engineering, Wright
State University, 2004. Uncertainty Quantification and Optimization of Structural
Response Using Evidence Theory.
For the last two decades, non-deterministic analysis has been studied extensively to
enable analytical certification of an engineering structural component, or an entire system,
for demanding performance requirements. Probability theory, with its strong mathematical
formulations, has gained popularity in Uncertainty Quantification (UQ). However,
recently, many scientific and engineering communities have recognized that intrinsic
uncertainties in an engineering system have a multifaceted nature (randomness,
non-randomness, partial randomness, vagueness, and so forth) and that traditional
probability theory does not always provide an appropriate framework for describing the
multiple types of uncertainties, especially for a large-scale and complex engineering
system. In developing high-performance practical mechanical systems, it becomes obvious
that our knowledge and data suffer from sheer imprecision, because we must explore
beyond the current level of technological knowledge and experience. The primary
objective of this research work is to develop an appropriate and unified UQ framework
for multiple types of uncertainty sources. One of the main challenges of UQ for practical
implementation in engineering designs is the computational cost, and that is the focus of
this dissertation: efficient computational algorithm development. Evidence theory is
advanced for large-scale aircraft structural design in a multi-physics environment.


TABLE OF CONTENTS

1. Introduction ………………………………………………………………… 1

2. Structural Reliability Analysis …………………………………………… 12
   2.1 Limit State Function ………………………………………………… 13
   2.2 Probabilistic Approaches …………………………………………… 14
   2.3 Non-probabilistic Approaches ……………………………………… 20

3. Evidence Theory …………………………………………………………… 25
   3.1 Set Operations and Mappings ……………………………………… 25
   3.2 Frame of Discernment ……………………………………………… 28
   3.3 Basic Belief Assignment …………………………………………… 31
   3.4 Combination of Evidence …………………………………………… 40
   3.5 Belief and Plausibility Functions …………………………………… 46

4. Structural Uncertainty Quantification Using Evidence Theory ………… 51
   4.1 Problem Definition …………………………………………………… 51
   4.2 BBA Structure in Engineering Applications ………………………… 52
   4.3 Evaluation of Belief and Plausibility Functions …………………… 54
   4.4 Numerical Example ………………………………………………… 58

5. System Reanalysis Methods for Reliability Analysis …………………… 65
   5.1 Surrogate-Based Reanalysis Techniques …………………………… 68
   5.2 Coefficient Matrix-Based Reanalysis Techniques ………………… 78
   5.3 Combined Iterative Technique ……………………………………… 100

6. Cost-Efficient Evidence Theory Algorithm ……………………………… 119
   6.1 Multi-Point Approximation ………………………………………… 121

   6.2 Cost Efficient Algorithm for Structural Uncertainty Quantification … 122
   6.3 Numerical Examples ………………………………………………… 127

7. Comparison of Reliability Approaches with Imprecise Information …… 140
   7.1 Problem Definition with Imprecise Information …………………… 140
   7.2 Case Study I: Three Bar Truss ……………………………………… 143
   7.3 Case Study II: Intermediate Complex Wing ………………………… 158

8. Reliability Assessment Using Evidence Theory and Design Optimization … 162
   8.1 Plausibility Decision Function ……………………………………… 163
   8.2 Sensitivity Analysis Using Evidence Theory ……………………… 165
   8.3 Reliability-Based Design Optimization Using Evidence Theory …… 173
   8.4 Numerical Example ………………………………………………… 177

9. Summary …………………………………………………………………… 194

10. References ………………………………………………………………… 201

LIST OF FIGURES

Figure 1.1 Uncertainty Quantification Techniques … 2
Figure 2.1 Limit-State Surface Between Failure and Safe Domains … 14
Figure 2.2 Graphical Interpretation of the Reliability Index … 18
Figure 2.3 The Relationship Between the Reliability Index and the Safe Probability … 19
Figure 2.4 Triangular Fuzzy Membership Function … 22
Figure 3.1 Frame of Discernment with Elementary Intervals … 29
Figure 3.2 Constructing BBA Structure in Evidence Theory … 33
Figure 3.3 Degree of Ignorance, m(X) … 35
Figure 3.4 Probabilistic BBA Structure … 39
Figure 3.5 Complementary BBA Structure … 39
Figure 3.6 Consonant BBA Structure … 40
Figure 3.7 General BBA Structure … 40
Figure 3.8 Rules of Combination by Parameter k … 45
Figure 3.9 Belief (Bel) and Plausibility (Pl) … 47
Figure 3.10 Bel and Pl in a Given BBA Structure … 47
Figure 3.11 BBA Structure (m({x1})=0.5, m({x2})=0.1, m({x1, x2})=0.3, m(Ω)=0.1) … 48
Figure 3.12 Belief (Bel), Plausibility (Pl), and Probability (Pf) in Elementary Propositions … 50
Figure 4.1 Multiple Interval Information and BBA for an Uncertain Parameter, x1 … 52
Figure 4.2 The Failure Set, UF, and Joint BBA Structure for Two Uncertain Parameters … 56
Figure 4.3 Uncertainty Quantification Algorithm in Evidence Theory … 57
Figure 4.4 ICW Structure Model … 58
Figure 4.5 Elastic Modulus Factor Information … 59
Figure 4.6 Load Factor Information … 60
Figure 4.7 Combined Information for Elastic Modulus Factor … 61
Figure 4.8 Combined Information for Load Factor … 62
Figure 4.9 Complementary Cumulative Plausibility and Belief Functions … 64
Figure 5.1 Three-Bar Truss … 73
Figure 5.2 Two Design Points of the Three-Bar Truss … 75
Figure 5.3 Relative Error Plots of Various Approximation Methods … 76
Figure 5.4 Successive Matrix Inversion (SMI) Algorithm for m Columns Modification … 88
Figure 5.5 Relative Computational Cost Ratios of SMI to LU Decomposition … 90
Figure 5.6 Plane Truss Structure … 94
Figure 5.7 Design Variables (βi) for Elements Under Uncertainty in the Elastic Modulus … 97
Figure 5.8 Sequential Computation Procedure of the SMI Method in Monte Carlo Simulation for Two Probabilistic Variables (β1 and β2) … 98
Figure 5.9 Combined Iterative (CI) Method … 101
Figure 5.10 Separating [∆K] into the Parts for SMI and an Iterative Method … 104
Figure 5.11 Successive Predicting Process of the BSI Method … 107
Figure 5.12 BSI Method Flowchart … 110
Figure 5.13 Iterative Result and the Improved Eigenvalue Distribution … 112
Figure 5.14 Intermediate Complexity Wing Structure Model and Design Variables … 114
Figure 5.15 Iterative Solution History of CI Method … 116
Figure 5.16 Improved Eigenvalue Distribution During Reanalysis Using CI Method … 117
Figure 6.1 Identifying the Failure Region Using an Optimization Technique … 123
Figure 6.2 Deploying AEPs and Constructing the Surrogate on the Failure Region … 124
Figure 6.3 The Cost-Efficient Algorithm for Assessing Bel and Pl … 126
Figure 6.4 Composite Cantilever Beam Structure Model … 127
Figure 6.5 Scale Factors (α, β) Information for EL and ET … 129
Figure 6.6 Tip Displacement (δTip) of the Composite Cantilever Beam with Respect to the Scale Factors (α and β) and the Surrogate Failure Region Using the Proposed Method … 130
Figure 6.7 ICW Structure with Uncertainties in the Root Region … 132
Figure 6.8 Aerodynamic Model of ICW … 133
Figure 6.9 Aerodynamic Pressure (Cp_lift) Distributions from Steady Aeroelastic Trim Analysis of Lift Forces … 134
Figure 6.10 Aerodynamic Pressure (Cp_roll) Distributions from Steady Aeroelastic Rolling Trim Analysis … 136
Figure 6.11 Interval Information for Uncertain Variables (α, β, and γ) from the First Expert … 136
Figure 6.12 Interval Information for Uncertain Variables (α, β, and γ) from the Second Expert … 137
Figure 7.1 Three Bar Truss … 143
Figure 7.2 Imprecise Information for the Scale Factors of Uncertain Parameters (E and P) … 144
Figure 7.3 Consonant Intervals and an Approximate Membership Function for the Scale of Uncertain Parameter (E) Using the Inclusion Technique … 146
Figure 7.4 Consonant Intervals and an Approximate Membership Function for the Scale of Uncertain Parameter (P) Using the Inclusion Technique … 148
Figure 7.5 System Response (Displacement) Membership Function for the Three Bar Truss … 149
Figure 7.6 PDF of e (Scale of Elastic Modulus) Using Uniform Distribution Assumption … 149
Figure 7.7 PDF of p (Scale of Force) Using Uniform Distribution Assumption … 149
Figure 7.8 Complementary Cumulative Measurements of Possibility Theory, Probability Theory and Evidence Theory for Three Bar Truss Example … 152
Figure 7.9 Discretized Normal PDF (N: the Number of Discretization) … 157
Figure 7.10 The Convergence of Bel, Pl, and Probability Regarding the Number of Discretization … 158
Figure 7.11 Scale Factor Information for Static Force from Different Sources … 159
Figure 7.12 Discretized Intervals for Elastic Modulus with Given Interval Statistics … 160
Figure 8.1 The Failure Region, f⁻¹(Uy) ∩ ck, in a Joint Proposition ck … 164
Figure 8.2 The Network of Local Approximations … 170
Figure 8.3 Linear Response Surface Models (LRSMs) for Sensitivity Analysis … 171
Figure 8.4 ICW for RBDO … 177
Figure 8.5 Elastic Modulus Factor Information … 178
Figure 8.6 Load Factor Information … 179
Figure 8.7 Combined Information for Elastic Modulus Factor … 180
Figure 8.8 Combined Information for Load Factor … 182
Figure 8.9 Proposition's Sensitivities of Plausibility of Elastic Modulus Factor … 183
Figure 8.10 Proposition's Sensitivities of Plausibility of Load Factor … 184
Figure 8.11 Sensitivity of Plausibility with Thickness Factors (TH 1, TH 2, and TH 3) … 185
Figure 8.12 The Optimization History of Objective Function and Design Variables … 189
Figure 8.13 Trust Region Uncertainty Quantification for Sequential Optimization Under Multiple Types of Uncertain Variables … 192

LIST OF TABLES

Table 3.1 The Evidence for True Value of x … 48
Table 4.1 Tip Wing Skin Thickness Factor (t1) … 61
Table 4.2 Root Wing Skin Thickness Factor (t2) … 61
Table 6.1 Composite Cantilever Beam Results Using the Vertex and Proposed Methods … 131
Table 6.2 ICW Results Using the Sampling, Vertex and Proposed Methods … 138
Table 7.1 Comparison of Results and Costs for Three Bar Truss Example … 152
Table 7.2 ICW Results Using the Vertex and Proposed Methods … 161
Table 8.1 Intermediate Complexity Wing Results … 181
Table 8.2 Failure Degrees of Belief, Plausibility Decision and Plausibility … 187

ACKNOWLEDGEMENTS

I am deeply grateful to my advisor, Professor Ramana V. Grandhi, for his academic guidance and individual attention. His continuous suggestions and encouragement sustained me throughout the research and writing of this dissertation. Especially during the first year of my doctoral studies, his respect for and belief in my limited knowledge made me responsible and passionate about my study and research.

I wish to express my sincere thanks to Professor Robert A. Canfield of the Air Force Institute of Technology. Many discussions with him and his excellent advice were greatly helpful to my studies. I am also grateful to Professor Youngsuk Shin of Ajou University, who introduced me to the field of structural optimization and gave me the opportunity to continue my studies in the U.S.A.

I would also like to extend my thanks to the Ph.D. committee members and my colleagues at the Computational Design Optimization Center at Wright State University: Dr. Ravi Penmetsa and Mr. Ed Alyanak, for their valuable suggestions and comments; Min-Suk Chun, whose friendship and encouragement were other benefits of my Ph.D. research work; and Brandy Foster, whose corrections and suggestions on English style and grammar are really appreciated.

I would like to acknowledge the support from the Air Force Office of Scientific Research (AFOSR) under grant F49620-00-1-0377 and from the Ph.D. fellowship granted by the Dayton Area Graduate Studies Institute (DAGSI).

I am greatly indebted to my lovely wife and my little daughter, Jae-Hee Bae, for their patient love and trust in me. I would like to give my special thanks to my parents and sisters, Sun-Jung Bae and Hee-Jung Bae, for their continuous support that enabled me to complete this work.

To my father and mother.

1. Introduction

In addition to deterministic analysis, non-deterministic analysis has been adopted in the last several decades for Uncertainty Quantification (UQ) in many structural systems. Probability theory has obtained popularity in many research areas, and stochastic analysis techniques, which are based on probability theory, have been widely used in engineering systems to model and propagate uncertainties. Some uncertainties in those systems, which occur with the nature of randomness, can be modeled with well-known probabilistic functions such as a Probability Density Function (PDF). However, the other parameters may not be assigned any random function of probability theory due to a lack of sufficient information and data. In that case, the uncertain parameters may take values within certain bounds instead of explicit PDFs. When conceiving innovative mechanical systems, it becomes obvious that available resources, such as our knowledge, experimental budget, and timeframe, may often be very limited and never enough. However, as mechanical systems and multidisciplinary performance requirements become complex and stringent, it is imperative to take into consideration various types of uncertainties that cannot be addressed by a probabilistic framework alone.

In a probabilistic UQ framework, strong assumptions are usually made to furnish complete randomness to the imprecise and bounded uncertain parameters. The strong assumptions include approximating or assuming a PDF in given bounds without any sufficient supporting evidence. Consequently, the result of reliability analysis using the probabilistic framework might be a mere reflection of the reinforced assumption. In this work, alternative UQ techniques are explored for a reliable structural design, to address these limitations of the traditional UQ framework and to enable the certification of the systems' performance.

Figure 1.1 Uncertainty Quantification Techniques

Depending on the nature of uncertainty in a system, various UQ techniques can be applied for appropriate propagation and quantification of uncertainty, as shown in Fig. 1.1. Uncertainties can be classified into two distinct types in the risk assessment community: aleatory and epistemic uncertainty [1-4]. Aleatory uncertainty is also called irreducible or inherent uncertainty. Parameter uncertainties with variability are basically aleatory uncertainties, but they should be treated as epistemic uncertainties when data is insufficient to construct a complete and smooth PDF. Epistemic uncertainty is subjective or reducible uncertainty that stems from a lack of knowledge and data. Model form and scenario abstraction uncertainties, which usually come from boundary conditions, unexpected failure modes, different choices of solution approaches, and so forth, are included in epistemic uncertainties. Formal theories introduced to handle those uncertainties are classical probability theory, possibility theory, evidence theory, and so on. The common issue among these theories is how to determine the degree to which uncertain events are likely to occur. A distinct difference among these theories is in the assignment of the degree of belief [5, 6]. Both classical probability theory and evidence theory limit the total belief for all possible events to be unity. On the other hand, there is no such restriction in possibility theory, since one may have perfect confidence in a certain event and may give it a possibility of one through a possibility distribution.

Probability theory, as a popular approach to uncertainty quantification in engineering structural problems for the last several decades, has been developed mostly for aleatory uncertainty. With complete and sufficient information, aleatory uncertainty is well represented by a probabilistic function, such as a PDF. The most familiar technique is Monte Carlo Simulation (MCS) [7]. It generates random values of the aleatory uncertain variables in a target system from given PDFs. The model is simulated with these random values to evaluate a certain performance probability. Besides MCS, there are several well-developed methods for reliability analysis using probability theory: the First-Order Reliability Method (FORM) [8], the Second-Order Reliability Method (SORM) [9-11], the Stochastic Finite Element Method (SFEM) [12], and so on. However, there may exist some epistemic uncertainties for which probability theory is not appropriate, because epistemic uncertainties often cannot be assigned to every single event in a way that satisfies the axioms of probability theory. Many researchers prefer possibility theory to probability theory for modeling these epistemic uncertainties in a system. Fuzzy set theory, which is also called possibility theory, was first introduced by Zadeh in 1965 [13]. Fuzzy set theory was intended to deal with problems involving vagueness and imprecision in real-life problems. Classically, a set of an uncertain variable is defined by its members; in fuzzy set theory, an event may be either a member or a non-member of the set based on different degrees of membership, or α-cuts. Usually, the membership of a fuzzy variable is given by a continuous mathematical function, which can be viewed as analogous to a PDF of probability theory. Structural design problems with fuzzy parameters were investigated by researchers such as Wood, Otto, and Antonsson [14], and Penmetsa and Grandhi [15].

In a real engineering structural system, both aleatory and epistemic uncertainties may be present simultaneously. For instance, in an aircraft design, sufficient data for the dimensions and material properties of some parts of the structure may exist with probability distributions, while the information for other issues, such as gust loads, control surface settings, operating conditions, and so forth, might not be expressed by either a membership function or interval information. Moreover, there may be partial evidence of an uncertain variable for which neither the probabilistic nor the possibilistic framework is appropriate; only certain intervals can be given for an uncertain parameter. Until now, when multiple types of uncertainties coexist in a target structural reliability analysis, UQ analyses have been performed by treating them separately or by making assumptions to accommodate either the probabilistic framework or the fuzzy set framework, even though the methodologies for manipulation of evidence are totally different.

For an alternative UQ technique, Shafer [16] developed Dempster's work and presented evidence theory, also called Dempster-Shafer Theory. Evidence theory is a generalization of classical probability and possibility theories from the perspective of bodies of evidence and their measures. Hence, evidence theory can handle not only epistemic uncertainty, but also aleatory uncertainty in its framework. The framework of evidence theory allows for pre-existing probability information to be treated together with epistemic information (such as a membership function, interval information, and so on) to assess likelihood for a limit-state function of interest. However, most applications of evidence theory have been for system maintenance or artificial intelligence, such as radio communication systems, system management in the nuclear industry, image processing, and decision making in design optimization problems [17-19]. Oberkampf and Helton [20] initially demonstrated evidence theory by quantifying uncertainty for a problem involving closed-form equations of mechanical problems. In this work, we attempt to apply evidence theory to practical engineering systems with implicit analysis techniques.

In evidence theory, a Basic Belief Assignment (BBA) structure, which is similar to a PDF in probability theory, is constructed with imprecise and insufficient information, but based on distinct bodies of evidence. The BBA structures can be given from several independent knowledge sources over the same frame of discernment. Dempster introduced Dempster's rule of combination [16], which enables us to compute the orthogonal sum of given belief structures from multiple sources to fuse given interval data from different independent sources. Unlike the PDF or the fuzzy membership function, the BBA structure in evidence theory usually cannot be expressed by a continuous explicit function with the given imprecise information. Because of the discontinuity in BBA, the resulting uncertainty in a system is usually quantified by many repetitive system simulations for all the possible propositions given by the BBA structures of uncertain variables. However, in modern structural designs, structural systems are usually numerically simulated with intensive computer codes, such as Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), and so on. Hence the computational cost of UQ analysis can be very high and prohibitive for most engineering structural systems.

To alleviate the intensive computational cost, which is one of the major
difficulties in applying evidence theory to engineering structures, a robust and efficient
technique is developed. In many engineering structural UQ analyses, the failure region is
comparatively small relative to the entire function space of interest, and a large amount of
computational resources is wasted on regions that do not contribute to the UQ result.
Therefore, in the proposed cost-efficient UQ algorithm, the computational resources are
focused only on the failure region to reduce the overall computational cost with a robust
surrogate model approach.

First, the proposed algorithm identifies the failure region in a defined UQ space
by employing a mathematical optimization technique, and then an approximation
approach is adopted to construct the surrogate of an original limit-state function for the
repetitive simulations of UQ analysis. In this work, for the robust surrogate model, the Multi-Point Approximation (MPA) method [21] is employed. MPA is a network of multiple
local approximations that are combined with a weighting function to determine the
contribution of each local approximation. The accuracy of MPA mainly depends on that
of the local approximation, hence the choice of local approximation is important. The
Two-Point Adaptive Non-linear Approximation (TANA2) method, developed by Wang
and Grandhi [22], is employed as a local approximation method. The efficiency and
accuracy of TANA2 were extensively demonstrated earlier in many engineering


disciplines [21-26]. TANA2 is very efficient when dealing with highly nonlinear implicit
problems with a large number of design variables. It was found that the belief and
plausibility functions were computed efficiently without sacrificing the accuracy of
resulting measurements by employing the proposed cost-efficient UQ algorithm.
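As a rough illustration of how such a network of local approximations behaves, the sketch below blends local linear models built at two expansion points using inverse-distance weights. The linear local models and the particular weighting function are assumptions standing in for TANA2 and the MPA weighting of [21, 22], not those methods themselves.

```python
import numpy as np

def mpa_predict(x, points, values, grads, eps=1e-12):
    """Blend local first-order models with inverse-distance weights."""
    x = np.asarray(x, dtype=float)
    local = [v + g @ (x - p) for p, v, g in zip(points, values, grads)]
    d = np.array([np.linalg.norm(x - p) for p in points])
    w = 1.0 / (d + eps) ** 2          # closer expansion points dominate
    return float(np.dot(w, local) / w.sum())

# two expansion points of g(x) = x1^2 + x2, with exact values/gradients there
pts  = [np.array([0.0, 0.0]), np.array([2.0, 1.0])]
vals = [0.0, 5.0]
grds = [np.array([0.0, 1.0]), np.array([4.0, 1.0])]
print(mpa_predict([1.0, 0.5], pts, vals, grds))  # prints 0.5; both local models agree here
```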

In the effort of reducing the computational cost further, a new direct and exact
reanalysis technique, the Successive Matrix Inversion (SMI) method, is developed based
on the binomial series expansion of a structural stiffness matrix. The SMI method gives
exact solutions for any variations to an initial design of a Finite Element Analysis (FEA);
that is, there is no restriction on the valid bounds of the design modification in using SMI.
The SMI method includes the capability to update both the inverse of the modified
stiffness matrix and the modified response vector of a target structural system by
introducing an influence vector storage matrix and a vector-updating operator. Since the
cost of reanalysis using SMI is proportional to the ratio of the changed portion to the
initial stiffness matrix, the SMI method is especially effective for a regional modification
in a structural FEA model. As a complementary reanalysis technique to SMI, the
Binomial Series Iterative (BSI) method is also developed for global modifications with a
small degree of change. By coupling the SMI method with an iterative method, a
Combined Iterative (CI) technique, in which the weaknesses of a typical iterative method
are overcome by the direct SMI method, is introduced for the first time. The
CI method is a new class of linear system solver. With the cost-efficient system
reanalysis techniques and UQ algorithm, the general UQ framework of evidence theory
can be successfully applied to practical and large-scale engineering applications.
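The exact recursion of the SMI method is developed in Chapter 5. As a minimal illustration of the underlying idea, that a column modification of the stiffness matrix admits an exact inverse update with no restriction on the size of the change, the sketch below applies successive Sherman-Morrison rank-one updates, one per modified column. The matrices and column changes are invented for the example, and the bookkeeping of the actual SMI method (the influence vector storage matrix and the vector-updating operator) is not reproduced here.

```python
import numpy as np

def update_inverse_for_column_change(K_inv, delta_col, j):
    """Exact update of K^{-1} when column j of K changes by delta_col.

    Sherman-Morrison identity for the rank-one change dK = u e_j^T;
    assumes the modified matrix remains nonsingular (denom != 0).
    """
    u = np.asarray(delta_col, dtype=float)
    w = K_inv @ u                      # influence vector
    denom = 1.0 + w[j]                 # 1 + e_j^T K^{-1} u
    return K_inv - np.outer(w, K_inv[j, :]) / denom

K = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
K_inv = np.linalg.inv(K)
K_new = K.copy()
for j, dcol in [(0, np.array([0.5, 0.0, 0.0])),
                (2, np.array([0.0, 0.1, 0.3]))]:
    K_inv = update_inverse_for_column_change(K_inv, dcol, j)
    K_new[:, j] += dcol

assert np.allclose(K_inv, np.linalg.inv(K_new))  # exact, no series truncation
```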


The strengths and weaknesses of evidence theory and improvements for solving
large-scale uncertainty quantification problems are also discussed and compared with
those of probability theory and possibility theory. Probability theory does not allow any
impreciseness in the given information, so it gives a single-valued result. On the other
hand, possibility theory and evidence theory give a bounded result. The result from
possibility theory gives the most conservative bound ([0, Necessity]), essentially because
of Zadeh’s extension principle [27]. In that principle, the degree of membership of the
system response corresponds to the degree of membership of the overall most preferred
set of fuzzy variables. Evidence theory gives an intermediate bounded result ([Belief,
Plausibility]), which always includes the probabilistic result; that is, lower and upper
bounds of probability based on the available information. It was found that a BBA
structure in evidence theory can be used to model both probability and possibility
distribution functions due to its flexibility. This explains why different types of
information (fuzzy membership function and PDF) can be incorporated into one
framework of evidence theory to quantify uncertainty in a system. The bounded result of
evidence theory can be viewed as the best estimate of system uncertainty because the
given imprecise information is propagated through the given limit-state function without
any unnecessary assumptions.

Sensitivity information for the quantified uncertainties can be very useful in the
design phase of an engineering structural system. With the sensitivity analysis, we can
determine the primary contributor to the uncertainties in a designed structural system, and


sensitivity analysis also makes it possible to improve the structural design by decreasing
the uncertainties in the system. In finding the sensitivity of plausibility with respect to an
expert opinion, the goal in this work is to identify the primary contributing expert opinion.
The result from sensitivity analysis indicates on which proposition the computational
effort and future collection of information should be focused. This sensitivity analysis
can be easily shifted from the sensitivity for plausibility to the sensitivity for ignorance,
which is defined by the subtraction of belief from plausibility. By decreasing the degree
of ignorance, we can be more confident in the reliability analysis result. The sensitivity
of a deterministic parameter in an engineering structural system is also developed to
improve the current design by decreasing the failure plausibility of a limit-state function
efficiently. However, the plausibility function in evidence theory is a discontinuous
function for varying values of a deterministic parameter, because of the discontinuity of a
BBA structure for uncertain parameters. The gradient of plausibility is represented using
the degree of plausibility decision (Pl_dec), which was introduced by applying the
generalized insufficient reason principle [73] to the plausibility function. Pl_dec can be
used as a supplemental measurement to make a decision as to whether a system can be
accepted.

For optimization of structural system based on performance reliabilities, UQ
analysis is incorporated into mathematical optimization techniques. The performance
reliability is obtained with not only perfect and complete data, but also imprecise and
insufficient information using the framework of evidence theory. By virtue of the
developed cost-efficient UQ algorithm with innovative system reanalysis techniques, an


intrinsically discontinuous and repetitive design optimization procedure with performance reliabilities is tackled successfully in this work. The development is demonstrated using several structural models, including a space truss structure, a composite cantilever beam, and an intermediate complexity wing representing a fighter aircraft.

2. Structural Reliability Analysis

The uncertainty in structural systems has been recognized by many researchers, and it has been admitted by many engineering societies that the performance of a system is non-deterministic and should be addressed by reliability analysis. Reliability is the belief measurement of a system performing its designed function over a specific period of time and under specified service conditions. Generally, by performing Uncertainty Quantification (UQ), we can obtain a better understanding of real structural behaviors and the reliability of the designed structural system. Even with advanced computing techniques, the challenges in structural reliability analysis are to achieve an accurate and fast reliability method for the calculation and prediction of the propagated uncertainty with multidisciplinary analyses, and to obtain the optimum design efficiently under uncertainty. In this chapter, the popular probabilistic approach and the non-probabilistic fuzzy approach are introduced after a brief description of a limit-state function in UQ.

2.1 Limit State Function

In the context of UQ in engineering systems, a limit-state function describes the state of a structure or a structural element. With a specific limit-state value on a desired performance measure, the design space of the structure is separated into "failure" and "safe" regions such that:

$$g(x) > 0, \quad x_i \in \text{Safe Region} \tag{2.1.1}$$
$$g(x) = 0, \quad x_i \in \text{Failure Surface} \tag{2.1.2}$$
$$g(x) < 0, \quad x_i \in \text{Failure Region} \tag{2.1.3}$$

The definition of the limit-state function is not unique, but the function is usually expressed as:

$$g(X) = \text{Allowable Function} - \text{Response Function} \tag{2.1.4}$$

where the Allowable Function defines the acceptable level of the response, and the Response Function is the structural response obtained from explicit or implicit functions of the design parameters of interest. The limit-states in most engineering structures can be classified into ultimate, damage, and serviceability limit-states. Compared to the ultimate and damage limit-states, the serviceability limit-states, which define the state of serviceability by measuring excessive deflection, excessive vibration, and so on, can be less critical.
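A minimal sketch of Eq. (2.1.4) in code, for an assumed cantilever tip-deflection serviceability check; the allowable value and the response function are illustrative placeholders, not a model used in this work.

```python
# g(x) = allowable - response: g > 0 safe, g = 0 on the surface, g < 0 failure
def g(x, allowable=0.01):
    P, L, E, I = x                        # load, length, modulus, area inertia
    response = P * L**3 / (3.0 * E * I)   # tip deflection of a cantilever beam
    return allowable - response

print(g([1000.0, 1.0, 2.1e11, 8.0e-7]))  # positive: this design point is safe
```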

2.2 Probabilistic Approaches

As shown in Fig. 2.1, in the cases in which we have only aleatory uncertain parameters with complete and sufficient information and data for their randomness, probabilistic approaches will be appropriate for UQ. The simplest example of the limit-state function can be given as the following stress-strength problem:

$$g(R, S) = R - S \tag{2.2.1}$$

where R is the strength, S is the stress resultant, and g(R, S) is the limit-state function of the structural reliability. It is assumed that R and S are non-negative and independent random variables with Probability Density Functions (PDFs) $f_R(R)$ and $f_S(S)$, respectively.

Figure 2.1 Limit-State Surface Between Failure and Safe Domains

In Fig. 2.1, the failure domain and the safe domain are separated by the limit-state surface, $g(R, S) = 0$. The probability of failure with the simple limit-state function is computed as

$$P_f = \int_{\Omega_f} f_{RS}(R, S)\, dR\, dS \tag{2.2.2}$$

where $f_{RS}(R, S)$ is the joint PDF of R and S, and $\Omega_f$ is the failure domain. The probabilistic techniques can be classified into sampling-based methods and analytical approximation methods. Among the sampling methods, the Monte Carlo Simulation (MCS) method [7] is one of the most popular techniques. A failure region is defined with a limit-state function g and a random variable vector X as $g(X) < 0$. The failure probability is

$$P_f = P[g(X) \le 0] = \int_{g(X) \le 0} f_X(X)\, dX \tag{2.2.3}$$

A failure set indicator function, $I[\cdot]$, can be defined as

$$I[\cdot] = \begin{cases} 1 & \text{if } [\cdot] \text{ is true} \\ 0 & \text{if } [\cdot] \text{ is false} \end{cases} \tag{2.2.4}$$

Then, Eq. (2.2.3) can be written as

$$P_f = \int I[g(X) \le 0]\, f_X(X)\, dX \tag{2.2.5}$$

In general, the joint PDF, $f_X(X)$, is equal to the product of the marginals when all the random variables are mutually independent:

$$f_X(X) = \prod_{i=1}^{n} f_{X_i}(x_i) \tag{2.2.6}$$

where n is the number of random variables. Instead of performing the multidimensional integration of Eq. (2.2.5), by picking N randomly distributed points, the failure probability can be estimated as

$$\hat{P}_f = \frac{1}{N} \sum_{k=1}^{N} I[g(X_k) \le 0] \tag{2.2.7}$$

where $\hat{P}_f$ represents the crude Monte Carlo estimator of the failure probability, with mean $\mu_{P_f}$. The variance of the sample mean is computed as

$$\mathrm{Var}[\hat{P}_f] = \frac{1}{N}\, \mathrm{Var}\big[I[g(X) \le 0]\big] \approx \frac{1}{N(N-1)} \sum_{k=1}^{N} \big[I[g(X_k) \le 0] - \hat{P}_f\big]^2 \tag{2.2.8}$$
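The following sketch carries Eqs. (2.2.4), (2.2.7), and (2.2.8) through for the stress-strength limit state; the particular distributions assigned to R and S are assumed only to make the example runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(R, S):
    return R - S                                   # stress-strength limit state

N = 100_000
R = rng.lognormal(mean=2.0, sigma=0.1, size=N)     # assumed strength samples
S = rng.normal(loc=5.0, scale=0.8, size=N)         # assumed stress samples

indicator = g(R, S) <= 0.0                         # I[g <= 0], Eq. (2.2.4)
P_f = indicator.mean()                             # crude MCS estimator, Eq. (2.2.7)
var_Pf = indicator.var(ddof=1) / N                 # estimator variance, Eq. (2.2.8)
print(f"P_f ~ {P_f:.4e}, std. error ~ {np.sqrt(var_Pf):.1e}")
```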

The variance is proportional to 1/N; that is, the standard deviation is proportional to $1/\sqrt{N}$. To decrease the variance of sampling methods efficiently, there are several useful techniques, including the Importance Sampling technique [28], the Latin Hypercube Sampling technique [29], and so on. There are also advanced sampling techniques, such as adaptive importance sampling [30], for handling complex and large-scale problems. However, in the procedure of importance sampling, the quality of the result depends on that of the analytical approximation to the original probability density function of interest. For large-scale, high-fidelity simulations, it is well known that the sampling methods might not be efficient and practical for use.

On the other hand, the analytical approximation methods have been devised to alleviate the high computational cost by employing truncated series expansions of the original model. One of the approximation methods is the First-Order Second Moment (FORM) method [8]. In the mean value FORM method, the limit-state function is expanded by the first-order Taylor series expansion at the mean of the random variables, $\mu = \{\mu_{x_1}, \mu_{x_2}, \ldots, \mu_{x_n}\}^T$, as

$$\tilde{g}(X) \approx g(\mu) + (X - \mu)^T \nabla g(\mu) \tag{2.2.9}$$

The mean value ($\mu_{\tilde{g}}$) and the variance ($\sigma_{\tilde{g}}$) of the approximate limit-state function $\tilde{g}(X)$ are

$$\mu_{\tilde{g}} \approx E[g(\mu)] = g(\mu) \tag{2.2.10}$$

$$\sigma_{\tilde{g}}^2 \approx \mathrm{Var}[g(\mu)] + \mathrm{Var}\big[(X - \mu)^T \nabla g(\mu)\big] = \sum_{i=1}^{n} \left(\frac{\partial g}{\partial x_i}\bigg|_{X=\mu}\right)^2 \sigma_{x_i}^2 \tag{2.2.11}$$

The reliability (safety) index $\beta$ is computed as

$$\beta = \frac{\mu_{\tilde{g}}}{\sigma_{\tilde{g}}} \tag{2.2.12}$$

The reliability index can be interpreted as the shortest distance from the mean point to the limit-state surface, as shown in Fig. 2.2.

Figure 2.2 Graphical Interpretation of the Reliability Index

Once the reliability index is obtained, the safe probability can be easily computed as follows:

$$P_{\text{safe}} = \int_{u_L}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}u^2\right) du = \Phi(\beta) \tag{2.2.13}$$

where $\Phi$ is the Cumulative Density Function (CDF) of the standard normal distribution, u is the standard normalized variable, and $u_L$ is the lower limit of u for a limit-state function, as shown in Fig. 2.3.

Figure 2.3 The Relationship Between the Reliability Index and the Safe Probability

By linearizing the original limit-state function at the mean value point, the original complex problem is changed into a simple problem. However, due to the linearization of the given limit-state function, the approximation method can give erroneous estimates for highly nonlinear cases. To increase the accuracy of the approximate estimates, some variations of the approximation method were developed, such as Second-Order Reliability Methods (SORM) [9-11], Advanced Mean Value methods (AMV) [31], and so on.
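A short sketch of the mean value computation, Eqs. (2.2.9) through (2.2.13), using SciPy only for the normal CDF; the finite-difference gradient and the example input moments are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def mean_value_beta(g, mu, sigma, h=1e-6):
    """Reliability index from a first-order expansion at the mean."""
    mu = np.asarray(mu, dtype=float)
    g0 = g(mu)                               # mean of g~, Eq. (2.2.10)
    grad = np.empty_like(mu)
    for i in range(mu.size):                 # forward-difference gradient at the mean
        x = mu.copy()
        x[i] += h
        grad[i] = (g(x) - g0) / h
    sigma_g = np.sqrt(np.sum((grad * np.asarray(sigma, dtype=float)) ** 2))  # Eq. (2.2.11)
    return g0 / sigma_g                      # Eq. (2.2.12)

g = lambda x: x[0] - x[1]                    # stress-strength limit state
beta = mean_value_beta(g, mu=[7.4, 5.0], sigma=[0.74, 0.8])
print(f"beta = {beta:.3f}, P_safe = Phi(beta) = {norm.cdf(beta):.5f}")  # Eq. (2.2.13)
```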

2.3 Non-probabilistic Approaches

The framework of probability theory for UQ is mathematically precise, rigorous, and straightforward. However, in complex and large-scale systems, the probabilistic approach might not be so effective, because the description tools in classical probability theory are not sufficiently expressive to characterize the propagating uncertainty with imprecise and incomplete information. To address the imprecision and incompleteness in reality, several alternative frameworks were proposed in the middle of the last century. The alternative frameworks include fuzzy set theory [13], interval theory [32, 33], evidence theory [16], and so on. In this section, classical set theory is reviewed first, and then fuzzy set theory is introduced briefly as a popular non-probabilistic tool for UQ.

A sharp, crisp, and unambiguous distinction (a Boolean phenomenon) between a member and a non-member of a well-defined set is the basic concept of classical set theory. Basically, in classical set theory, an individual or element set is either a member or a non-member of a specified set. That is, it is not allowed that an individual is partially in a set and also partially not in the same set at the same time. In probability and statistics, it can be said, "The probability for an individual to be a member of a set is 80%." The final outcome is still either "it is" or "it is not" a member of the set: there is an 80% chance that the prediction, "it is a member of the set," is right. The prediction does not mean that the individual has 80% membership in the set and also 20% non-membership of the same set. On the contrary, in fuzzy set theory, the degree of membership is modeled to express partial membership to a specified set.

The fundamental mathematical difference between fuzzy set theory and classical probability theory is in the way of assigning the mass of belief to a set. Classical probability theory assigns its basic mass of belief to each element or individual set, whereas fuzzy set theory allocates it to consonant subsets of a set. In fuzzy set theory, a membership function is associated with the referential fuzzy set of a variable, X. The referential fuzzy set could be viewed as a finite sample space in probability theory. In most engineering problems, a fuzzy variable is defined as a continuous variable, the set of interest is expressed with an interval, and the membership function can also be described as a continuous function. The subset of the fuzzy set is determined by the membership function with respect to the specified level of membership. At all levels of membership from zero (non-membership) to one (full membership), different intervals of confidence can be considered with the given membership function. Generally, a subset $X_\alpha$ denotes the α-cut of the fuzzy set X at a specified α-level of the given membership function $\mu_X(x)$ as follows:

$$X_\alpha = \{x \in X \mid \mu_X(x) \ge \alpha\} \tag{2.3.1}$$

For example, with the given fuzzy membership function shown in Fig. 2.4, the α1-level of the fuzzy variable x is defined as $X_{\alpha_1}$ in the fuzzy set.
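For a triangular membership function such as the one in Fig. 2.4, the α-cut of Eq. (2.3.1) has a closed form; the triangle vertices below are assumed for illustration.

```python
import numpy as np

def triangular_mu(x, a=0.0, c=1.0, b=2.0):
    """Triangular membership with support [a, b] and peak at c."""
    return np.maximum(np.minimum((x - a) / (c - a), (b - x) / (b - c)), 0.0)

def alpha_cut(alpha, a=0.0, c=1.0, b=2.0):
    # closed-form interval {x | mu(x) >= alpha}, Eq. (2.3.1)
    return a + alpha * (c - a), b - alpha * (b - c)

for alpha in (0.25, 0.5, 0.75):
    lo, hi = alpha_cut(alpha)
    print(f"alpha = {alpha:.2f}: X_alpha = [{lo:.3f}, {hi:.3f}]")
```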

Figure 2.4 Triangular Fuzzy Membership Function

Although a membership function does not have to be continuous or integrable, there are two basic properties: normality and convexity.

Normality: A fuzzy set is said to be a normal fuzzy set if and only if

$$\max_{x \in R} \mu_X(x) = 1 \tag{2.3.2}$$

Convexity: A fuzzy set is convex with a membership function $\mu_X(x)$ and $X \subset R$ if

$$\forall x_1, x_2 \in X, \quad \lambda \in [0, 1] \tag{2.3.3}$$
$$\mu_X[\lambda x_1 + (1 - \lambda) x_2] \ge \min\{\mu_X(x_1), \mu_X(x_2)\} \tag{2.3.4}$$

When multiple fuzzy variables are considered in a functional relationship, the corresponding fuzzy responses are computed by Zadeh's extension principle [27]. For instance, let X and Y be two fuzzy sets with Z ⊆ R, and consider a two-variable function:

$$F: X \times Y \to Z \tag{2.3.5}$$

Let $\mu_X(x)$, $\mu_Y(y)$, and $\mu_Z(z)$ be their associated membership functions. Given $\mu_X(x)$ and $\mu_Y(y)$, define

$$\mu_Z(z) = \max_{z = F(x, y)} \big\{\min\{\mu_X(x), \mu_Y(y)\}\big\} \tag{2.3.6}$$

The fuzzy membership function for the implicit response of an engineering application is usually obtained by using interval analysis techniques [34] at each α-level with Eq. (2.3.6).
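A brute-force sketch of Eq. (2.3.6) on a grid, with F(x, y) = x + y and triangular memberships assumed for illustration; practical implementations use the interval analysis techniques of [34] at each α-level rather than this exhaustive search.

```python
import numpy as np

tri = lambda x, a, c, b: np.maximum(
    np.minimum((x - a) / (c - a), (b - x) / (b - c)), 0.0)

xs = np.linspace(0.0, 2.0, 201)
ys = np.linspace(1.0, 3.0, 201)
mu_x = tri(xs, 0.0, 1.0, 2.0)
mu_y = tri(ys, 1.0, 2.0, 3.0)

F = xs[:, None] + ys[None, :]                     # response surface z = F(x, y)
pair_mu = np.minimum(mu_x[:, None], mu_y[None, :])  # min(mu_X, mu_Y) per pair

zs = np.linspace(1.0, 5.0, 101)
dz = zs[1] - zs[0]
# mu_Z(z): best pairwise membership over all (x, y) mapping near z, Eq. (2.3.6)
mu_z = np.array([pair_mu[np.abs(F - z) <= dz / 2].max(initial=0.0) for z in zs])
print(f"peak of mu_Z at z = {zs[np.argmax(mu_z)]:.2f}")  # expect ~3.0 for x+y
```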

Recently, many scientific and engineering communities have admitted that both aleatory and epistemic uncertainties coexist in a practical engineering system. However, neither probability theory nor fuzzy set theory always provides an appropriate framework for handling multiple types of uncertainties, because the frameworks are not compatible with each other due to fundamental theoretical differences. Until now, when multiple types of uncertainties coexist in a structural reliability analysis, UQ analyses have been performed by treating them separately or by making strong assumptions to accommodate either the probabilistic framework or the fuzzy set framework. As a generalization of classical probability and possibility theories from the perspective of bodies of evidence and their measures, evidence theory can accept aleatory uncertainty information (pre-existing probability information) as well as any epistemic information (certain bounds or possibilistic membership functions, etc.) to assess likelihood for a limit-state function. Hence, due to the flexibility of its basic axioms, evidence theory is investigated and a unified framework for multiple types of uncertainties is developed in the following chapters.

3. Evidence Theory

Evidence theory [16], also known as Dempster-Shafer Theory, was originated by Arthur P. Dempster and was further developed by Glenn Shafer. Evidence theory allows us to express not only aleatory uncertainty, but also epistemic uncertainty. Aleatory uncertainty is irreducible and related to natural variability. Epistemic uncertainty can be defined as a lack of knowledge or data in any phase or activity of the modeling process. The derivation of evidence theory is based on set theory, because possible propositions of interest can be expressed as subsets of the set of all possible events. Set theory provides useful tools to handle subset and superset relationships in an explicit and consistent manner. Hence, some basic notions and notations in set theory are introduced first.

3.1 Set Operations and Mappings

A set consists of a finite or infinite number of elements. There are several ways to denote a set for each particular circumstance. First of all, if $a_1, a_2, \ldots$, and $a_n$ are the elements of a set A, then we write

$$A = \{a_1, a_2, \ldots, a_n\} \tag{3.1.1}$$

Alternatively, we can use a condition expression for a set, as in Eq. (3.1.2):

$$\{x \mid \text{the condition for } x\} \tag{3.1.2}$$

The most basic and well-known symbols used in set theory are ∈, ⊆, ⊂, and =. First, we write x ∈ A to indicate that x is an element of A, and x is said to be contained in A. A ⊆ B indicates that A is a subset of B and B is a superset of A. We can say A = B if and only if A ⊆ B and B ⊆ A. In the case in which one of these symbols is negated, we put a slash through it: a ∉ A, A ⊄ B, and A ≠ B.

There are some set operators to make a new set from available sets, A and B. The notation A ∩ B is used to denote the intersection of A and B. The intersection indicates the set of all elements that are in both sets:

$$A \cap B = \{x \mid x \in A \text{ and } x \in B\} \tag{3.1.3}$$

The notation A ∪ B denotes the union of the two sets:

$$A \cup B = \{x \mid x \in A \text{ or } x \in B\} \tag{3.1.4}$$

The difference of the two sets, A and B, is denoted as

$$A - B = \{x \mid x \in A \text{ and } x \notin B\} \tag{3.1.5}$$

The complementary set of A, which is defined as a subset of a set Θ, is indicated as

$$\bar{A} = \Theta - A \tag{3.1.6}$$

The symbol ∅ is used to denote the empty set:

$$\emptyset = \{\,\} \tag{3.1.7}$$

A mapping from A into B, which assigns each element x ∈ A to an element σ(x) ∈ B, is denoted by

$$\sigma: A \to B \tag{3.1.8}$$

A mapping σ from a set A into B is called a function on A, and we can denote the function σ by f, g, and so on. For X ⊆ A, we denote

$$\sigma(X) = \{\sigma(x) \mid x \in X\} \tag{3.1.9}$$

For y ∈ B,

1. If there exists a one to one mapping from A onto B.1.12) 3. The power set of A is denoted by 2 A = { X | X ⊆ A} (3. we can make a set A × B called the Cartesian product of A and B as A × B = {(a. the collection of all subsets of A and itself is called the power set of A. b) | a ∈ A. then we say that it is one-to-one corresponding mapping between A and B.1. b ∈ B} (3.2 Frame of Discernment Evidence theory starts by defining a frame of discernment that is a set of mutually exclusive “elementary” propositions.σ −1 ( y ) = {x | x ∈ A.11) For a given set A. σ ( x) = y} (3. Any problem of likelihood takes some possible set 28 . Given two sets A and B. And a mapping is to be from A onto B if for every b ∈ B there exists a ∈ A such that σ ( A) = B .10) A mapping is said to be “one-to-one” if the elements of A and B are distinctively mapped to each other.

For a given set A, the collection of all subsets of A, together with A itself, is called the power set of A. The power set of A is denoted by

$$2^A = \{X \mid X \subseteq A\} \tag{3.1.11}$$

Given two sets A and B, we can make a set A × B, called the Cartesian product of A and B, as

$$A \times B = \{(a, b) \mid a \in A,\ b \in B\} \tag{3.1.12}$$

3.2 Frame of Discernment

Evidence theory starts by defining a frame of discernment, which is a set of mutually exclusive "elementary" propositions.

2) Proposition {x1.1. but we don’t know which one is true.Various propositions can be expressed for negation. 30 . elementary propositions should be defined to reflect all of the available evidence within the power set of X. the true value of load is assumed not to be located in both of the elementary propositions. {x3}. because we assume that the true value of load exists in the frame of discernment.2. X} (3. {x1. x2} in the set of 2X means that one and only one of the two propositions is true.12) represents all the possible distinct propositions. conjunction. The power set of X (Eq. Because elementary propositions are selected to be mutually exclusive to each other. The power set of X is given as 2X = {∅. {x1}. where n is the number of elementary propositions. x2 or x3. instead of our degree of belief in the proposition X. we may be able to say that proposition X represents our degree of complete uncertainty. Hence. proposition X. x3}. {x1. x2}. 3. 2n. {x2. However. The total number of the possible propositions is 2n. The proposition X of 2X set means that the true value of the load is located in interval of x1. and disjunction to elementary propositions. x3}. {x2}. this proposition X does not convey any useful information for us to quantify uncertainty with respect to defined elementary propositions in the parameter load. Hence. and it is always true. where n is the number of elementary propositions.

m(A) ≥0 for any A ∈ 2X (3.1) The number m(A) represents the portion of total belief assigned exactly to proposition A.4) A∈2 X We do not assign any degree of belief to the empty proposition ∅. we ignore the possibility of an uncertain parameter being located out of the frame of discernment in evidence theory. This measure m.2) II.3. 1]. BBA expresses our degree of belief in a proposition. must satisfy the following three axioms: I.3 Basic Belief Assignment In evidence theory. For example. 1] (3. the basic belief assignment function. m: 2X→[0. BBA is assigned by making use of a mapping function (m) in order to express our belief in a proposition with a number in the unit interval [0.3) m( A) = 1 III. (3. It is determined by various forms of information: sources.3. the basic propagation of information is through Basic Belief Assignment (BBA).3. and so forth. m(∅)=0 (3. quantity and quality of information. experimental methods.3. The total belief will be obtained by considering Belief and Plausibility functions that will be discussed later. Though these three axioms of evidence theory look 31 .3. that is.

y3} (3. x2. Evidence may not be available for all of the single.8) p(y1)+ p(y2)+ p(y3)=1 (3. x3}. the axioms for the BBA functions are less restrictive than those for probability measure. elementary propositions {x1}.3.2 (3. Ω={y1. and {x3}. the frame of discernment is initially defined in terms of elementary propositions with all available evidence.similar to those of probability theory.2 (3.6 (3. the probability mass function p is defined only for an elementary.9) On the other hand.3. in evidence theory. We can obtain a probability distribution like the following by a probability mass function p.7) p(y3)=0.3.5) where Ω is a sample space. p(y1)=0. 32 . there may exist evidence for proposition {x1. but.3.3. y2.6) p(y2)=0. For example. x2} that cannot be divided for two propositions {x1} and {x2}. The given evidence may not exactly correspond to a defined elementary proposition. {x2}. For instance. In probability theory. when a frame of discernment is given as X={x1. single proposition.

x3} × × {x1. x2} {x1. And. In Fig. by employing a baseless assumption. x2. with the BBA function. x2} is suitable for a proposition {x1. x2} which is already defined in a possible subset of X. x3} Figure 3. x2} are defined. x3 {x1} {∅ ∅} × {x1} {x2} {x1. 33 . this evidence can be used to assign the degree of belief (BBA) to the proposition {x1.2. {x1} and {x2}. Possible Events Available Evidence Evidence for {x1} X {x3} X (Frame of discernment) x1. in order to use probability theory. 2X. x2} Evidence for {x1. It is a more natural and intuitive way to express one’s degree of belief with partial information. m. the evidences for {x1} and {x1.In this case. given evidences might not be sufficient to assign BBAs to all of the set 2X. individually. And moreover. in evidence theory. without being split in two propositions. such as a uniform distribution function without reasonable information. BBA can be given to any possible subset of X. x2} directly. x2} are available so that only BBAs for {x1} and {x1. x2} should be distributed to its subsets. The evidence for event {x1. the evidence for proposition {x1. 3.2 Constructing BBA Structure in Evidence Theory However. x2} {x2. propositions {x1} and {x2}.

25 (3. is available. And. and it is assumed that E1 evidence can be used only to define BBA for proposition {x1}. the BBA structure can be given like this m({x1})=0. we cannot give the rest of 0. assume that if the E2 evidence supports just the proposition {x1}. Hence. m({x1}) is obtained from E1 evidence. that is. then m({x1}) will be 1. In other words. let’s assume that there is another source E2 for {x1}. {x2}. However. it can be said that if evidence. if the E2 evidence supports the proposition {x2}.25.75. {x1. x3}) based on the evidence for {x1}. in evidence theory. {x3}. its information is not transmitted to the rest of the propositions of X as evidence to determine the BBA of propositions x2 or x3.All the possible set with defined elementary propositions are 2X: {{x1}. x2}.10) For example.3.3.75 degree of belief that x1 is true.25 to m({x2}). m({x1}) from E1 evidence is interpreted such that we are certain with a 0. we are in total 34 . m({x3}).3. 3. in another case. then m({x1}) will be just 0. As shown in Fig. x3}. {x2. which exactly corresponds to proposition {x1}. E1 evidence does not imply that m({x2})+m({x3}) or m({x2.75 and m({x2}) will be 0. X} (3. or m({x2. x3}.11) Where. x3}). before it is possible to access the E2 evidence. For example. {x1. m(X)=0.0.

and. at this time. the frame of discernment consists of three elementary propositions.25 m({x1})=1. m(X) For example. when there is a murder case and there are three suspects. then we can be sure that x1 is the murderer with a degree of belief from the testimony.ignorance about the rest of the BBA degree 0. the remaining 0. we do not 35 .25 By E2 evidence Supporting x1 Supporting x2 m({x2})=0. If there is a witness (E1 evidence). because the evidence given by the witness is just for suspect x1. m({x1}) or m({x3}).75 E1 Evidence may or may not support proposition x1 Uncertainty: m(X)=0. X={x1. x2}). Is {x1} true? m({x1})= By E1 evidence 0. Hence.25 to m({x1. our degree of belief can be changed. x2.25 BBA should be given to proposition X to express our degree of Uncertainty. x3}.75 Figure 3.25.75. and he gave his testimony just for x1. When we find other witnesses (E2 evidence).0 m({x1})=0. and he did not testify against suspects x2 and x3. let’s say m({x1}) is 0. where {x1} means that x1 is the murderer. we cannot assign the remaining 0. However.3 Degree of Uncertainty.

because there is uncertainty in the information.3. the following properties are summarized: 1) Additivity does not necessarily hold: m({x1})+m({x2})≠m({x1.25. it seems that the evidence related with {x3} is not available. It should be included in the degree of Uncertainty. On the other hand. {x2}. The BBA can be determined by various information: sources. m({x1})=0. a BBA structure with X={x1. that is. quantity of information. x2}). x2}) (3.13) In probability theory. and so on. In this BBA structure. x3} can be also given like this. quality of information.3.know how it will be changed. methods. it is not necessarily true.12) The function m satisfies the three axioms. m({x1})+m({x2}) is not the same as m({x1. additivity is one of the axioms [ p(a)+p(b)=p(a∪b) ].1. x2})=0. We accept our deficiency in knowledge and information to produce a perfect and complete opinion. m(X)=0. m({x2})=0. For another example. m is a basic belief assignment function. or {x3}. in evidence theory. For instance. Thus. m({x1. which means that we have no idea about the remaining 0. The BBA for 36 . x2. At least we know that the other testimonies (E2 evidence) can support {x1}. m({x3}) is zero. With these examples of a BBA structure.3.5.1 (3.

When we are interested in the degree of belief in {x1. The evidence for {x1} is not transmitted to {x1. p(x1). x2} (3. x2} is not obtained by adding up m({x1}) and m({x2}). x2}. However. Hence. then BBA structure will be the PDF of probability theory. x2}.2. rather it is obtained from evidence for {x1. m({x1. In evidence theory. if we handle only aleatory uncertainties and there is sufficient information for all elementary propositions. x2}) even though {x1} is a subset of {x1. and evidence for {x1. it is possible that m({x1}) ≥ m({x1. 37 . shown in Fig. x2}). and “degree of belief” for the proposition in {x1. x2}) even though {x1} is a subset of {x1. x2} to its subsets. Therefore. x2}) can be taken as one’s total degree of belief by the belief function. we cannot determine any distribution of the BBA of proposition {x1.3 how BBAs are assessed with the given partial evidence. x2} to its subsets. x2} by making use of the Belief and Plausibility functions. probability for x1. In evidence theory. x2} and the evidence for {x1.3. p({x1. x2}) can be both “degree of uncertainty” between {x1} and {x2}. x2} might be independent of m({x1}) and m({x2}). 2) Monotonicity does not necessarily hold: m({x1})≥m({x1.14) In probability theory. x2} also does not affect its subsets {x1} and {x2}.proposition {x1. we cannot determine any distribution of the BBA of proposition {x1. In evidence theory. it is shown in Fig. 3. then m({x1. x2}. should always be smaller than probability for x1 ∪ x2.3.

3) It is not required that m(X)=1, but m(X) ≤ 1:

In probability theory, p(∅)=0 implies that p(X)=1. However, in evidence theory,
this implication is not accepted. The BBA can be assigned only with reasonable evidence
or other information.

In summary, BBA is not probability, but it is just a belief in a particular
proposition irrespective of other propositions. In evidence theory, BBA is not the final
goal in which we are interested, but it expresses a portion of the total belief exactly
assigned to a proposition. The final goal is to determine a bound with degrees of belief
and plausibility by considering all of the possible beliefs that may be partial and
incomplete. By contrast, in probability theory, we finally obtain a single value of
probability for a proposition.

The BBA structure enables the flexibility to express belief for possible
propositions with the given partial and insufficient evidence, and it also makes it possible
for us to avoid making excessive or baseless assumptions when assigning our belief to
propositions. With the flexibility, the BBA structure can be successfully used to express
typical partial-belief structures. For instance, with a frame of discernment, X={x1, x2, x3,
x4, x5}, the following BBA structures are valid:

38

Probabilistic BBA Structure
BBAs are assigned to all of the elementary propositions

m({x1})=0.7, m({x2})=0.2, m({x3})=0.1, m({x4})=0.2, m({x5})=0.1

m({x1})

m({x2})

m({x3})

m({x4})

m({x5})

Figure 3.4 Probabilistic BBA Structure

Complementary BBA Structure
BBAs are given to a subset of X and its complementary subset. The
complementary belief structure is not necessarily a probabilistic BBA structure
because the subset is not always a single, elementary proposition.

m({x1, x3})=0.7, m({x2, x4, x5})=0.3

m({x1, x3})
m({x2, x4, x5})

Figure 3.5 Complementary BBA Structure

Consonant BBA Structure
BBAs are given to subsets which are consonant subsets to each other.

39

m({x3})=0.2, m({x2, x3, x4})=0.3, m({x1, x2, x3, x4, x5})=0.5

m({x1})
m({x2})


m({xn})

Figure 3.6 Consonant BBA Structure

General Belief Structure
In this BBA structure, the BBA can be assigned in any way: discontinuous,
partially consonant, or partially overlapped.

m({x1})=0.7, m({x1, x3})=0.2, m({x4, x5})=0.1

m({x1})

m({x3}) …

m({xn})

m({x2})

Figure 3.7 General BBA Structure

3.4 Combination of Evidence

Different BBA structures can be obtained from several independent knowledge
sources over the same frame of discernment. In evidence theory, the combination of
40

evidence or information is still an open question and there is no unique method as there is
in probability theory. Initially, Dempster introduced Dempster’s rule of combination,
which enables us to compute the orthogonal sum of given belief structures from multiple
sources. After that, several combination rules have been introduced to overcome the
criticism of Dempster’s rule of combining [35]. Recently, Sentz and Ferson [36] surveyed
combination rules by defining types of evidence and investigated Dempster’s rule of
combining by comparing the algebraic properties with other combination rules. Here,
some of the combination rules are introduced.

3.4.1 Dempster’s rule of combining

Two BBA structures, m1 and m2, given by two different evidence sources, can be
fused by Demspster’s rule of combining in order to make a new BBA structure, as shown
in Eq. (3.4.1),

m(A) =

m1 (C i )m 2 (C j )

Ci ∩ C j = A

1−

m1 (C i ) m 2 (C j )

, A≠∅

(3.4.1)

Ci ∩C j =∅

where Ci and Cj denote propositions from each source. In Eq. (3.4.1), the denominator
can be viewed as a contradiction or conflict among the information given by independent
knowledge sources. Even when some conflicts are found among the information,
Dempster’s rule disregards every contradiction by normalizing with the complementary
degree of contradiction because it is designed to use consistent opinions from different
41

we assume that there is enough consistency among given sources to use Dempster’s rule of combining. Dempster’s rule can be appropriate to a situation in which there is some consistency and sufficient agreement among the opinions of different sources.sources as much as possible. 37]. On the other hand.4. Yager [35] argued that the conflict or contradiction comes from our ignorance: thus. this normalization can cause a counterintuitive and numerically unstable combining of information when the given information from independent sources contains extreme contradictions or conflicts [35. which implies total ignorance. Yager [35] has proposed an alternative rule of combination in which all contradiction is attributed to total ignorance. q (∅) ≥ 0 (3. i. instead of normalizing out the contradiction. However.4.e. 3.2 Yager’s rule of combination The main difference between Dempster’s rule of combining and Yager’s rule of combination is in the handling of contradiction in given belief structures. The ground probability mass assignment (q) is introduced in Yager’s formulation and has different properties that allow the ground probability mass assignment of the null set to be greater than 0.2) Yager’s rule of combination is given by: 42 . In other words. he allocates the contradicted portion to the frame of discernment (X or Ω). In this paper.

X (3.5) 3. f (C ) ≥ 0 (3.4.6) f (C ) = 1 .q (C ) = m1 ( A)m 2 ( B ) (3.4. which include both Dempster’s rule and Yager’s rule. Any rule of combination can thus be expressed as: m(C ) = q (C ) + f (C )q (∅) .4.4. Inagaki uses Yager’s ground probability assignment (q) and develops a rule of combination in a systematic manner. C≠∅ (3.3) A ∩ B =C m(C ) = q (C ) for C≠∅.4.7) C∈2 X .4) m( X ) = q ( X ) + q (∅) (3.4. f(C) denotes an allocation coefficient for proposition C in a restricted property such as m(C ) q (C ) = m( D ) q ( D ) (3.4.C ≠ ∅ where.3 Inagaki’s unified rule of combining Toshiyuki Inagaki introduced a combination rule with a continuous paramerized combination operations [38].8) 43 .

where the conflict k is defined by: k= f (C ) for any C≠X.4. 3.4.6) can be rewritten with the restriction of Eq. Inagaki found that system safety could be changed not only by the type of 44 .11) m( X ) = [1 + kq(∅)]q ( X ) + [1 + kq(∅) − k ]q (∅) (3. ∅ q (C ) (3.8.4. the above equation addresses that no knowledge is assumed regarding relative importance or credibility of propositions.10) From the above equations.8).9) where f(C) can be interpreted as a scaling function for q(∅).12) m(∅) = 0 .For any propositions C and D.4.4.4. ∅ (3. Since k is continuous-valued. q (C ) + f (C )q (∅) q ( D ) + f ( D )q (∅) = q (C ) q( D) (3. (3.13) With Inagaki’s rule. we obtain a unified rule of combination as follows: m(C ) = [1 + kq (∅)]q (C ) for C≠X. except X or ∅. (3. 0 ≤ k ≤ 1 1 − q (∅) − q ( X ) (3.4. the unified rule of combination represents infinitely many rules of combination. The general expression in Eq. Dempster’s rule is obtained by setting k=[1-q(∅)]-1. as shown in Fig. And Yager’s rule can be realized when k=0 in the above equations.

.8 Rules of Combination by Parameter k 3.14) wi mi ( A) where. The formula for the mixing combination rule is m1.n = 1 n n i =1 (3.4.. mi’s are the BBA for the belief structures and the wi’s are the weights assigned based on the credibility of the evidence. Inagaki has proposed to find an optimal value of k with safety-control policies and resulting plausibility of system event. but also by the choice of a rule of combination.safety-control policy. Yager’s rule Dempster’s rule k 0 1/[1-q(∅)] 1/[1-q(∅)-q(X)] Figure 3. Hence.4 Mixing or averaging method Information from multiple independent sources is treated as equally credible.4. 45 . and contradiction or conflict among those sources is not taken into consideration by simply averaging the given opinions.

as shown in Fig. 46 . as opposed to a single value of probability. Pl(A)]. 3. where Bel(•) and Pl(•) are given as. and this rule is not associative except at the k value that coincides with Dempster’s rule. 1]. Inagaki’s unified rule of combining gives us a useful tool to interpolate or extrapolate the rules of combination proposed by Yager and Dempster. the procedure to determine k is not well justified yet. A mixing method generalizes the averaging operation that is usually used for aleatory uncertainty by assuming a uniform distribution. It is investigated that Dempster’s rule of combination performs satisfactorily under situations of low conflict [36].9. Our total degree of belief in a proposition “A” is expressed within a bound [Bel(A).5 Belief and Plausibility Functions Due to a lack of information. Dempster’s rule of combining is selected to aggregate information from different independent sources with the assumption that there is some consistency among the given information. 3. In this study. The most crucial point in those combination methods is the reallocation of the degree of BBA regarding contradiction or conflict. Yager’s rule satisfies Quasiassociative. it is more reasonable to present bounds for the result of uncertainty quantification. hence. However. the resulting combined BBA structure may be affected by the order of combination.There are several other combination methods. Yager’s rule of combination transfers the degree of contradiction to the degree of ignorance and it seems very persuasive. when there are multiple knowledge sources. which lies in the unit interval [0. However.

BBA structure A (Shaded area) Figure 3. every proposition that allows for the proposition A to be included at least partially is considered to imply the plausibility of proposition A. We called m(Ci) a “portion” of total belief in the proposition A in the previous section. Bel(A) is obtained by a summation of the BBAs for propositions that are included in the proposition A.1) m(C i ) : Plausibility function (3. With this viewpoint. Bel(A) is our “total” degree of belief.9 Belief (Bel) and Plausibility (Pl) Due to Uncertainty.5. That is.2) Ci ⊂ A Pl ( A) = Ci ∩ A ≠ ∅ Bel(A) Uncertainty Bel(¬A) Pl(A) Figure 3. because the BBA in a proposition is not divided in any way to its subsets.Bel ( A) = m(C i ) : Belief function (3.5. The degree of plausibility Pl(A) is calculated by adding the BBAs of propositions whose intersection with the proposition A is not an empty set. the degree of belief for the proposition A and the degree of belief for a negation of the proposition A do not have to sum up to unity.10 Bel and Pl in a given BBA structure 47 .

Bel(A). Fig.1) 48 .1. For example. .1 0. Bel(A) is obtained by adding the BBAs of propositions that imply the proposition A. The belief function. these two measurements consist of lower and upper probability bounds.3.Again.3 0.1 Figure 3. In a sense.5 Result (interval) [0. assume that there are three different methods to detect the true value of x. Table 3. 1] [1.10 represents a BBA structure where the proposition A is expressed in the shaded area. 3.5 0. Pl(A) is plausibility calculated by adding the BBAs of propositions that imply or could imply the proposition A. C3. C4. x2})=0. C1. On the other hand. 3] x2 2 x3 3 0. is obtained by adding up the BBAs for C1 and C3 that are totally included in the shaded area.1 The Evidence for True Value of x Method 1 method BBA 0.1 st 2nd method 3rd method 0 x1 1 0. m({x2})=0.1 0.Assessing Bel and Pl with evidence theory For a simple numerical example.3 0. m(Ω)=0.5. and C5 are added up for Pl(A) because those propositions are partially or totally implying the proposition A. whereas. 2] [0. m({x1. 2] [0.11 BBA Structure (m({x1})=0. C2.

Bel({x3})=0. and {x3} are computed as follows.5.5 (3.3 (3.11 with a frame of discernment Ω={x1.1 (3.0 (3. we obtained test results.5. the reliability of the experiment method.5) Pl({x3})=m(Ω)=0.5 error range. where the BBA was assumed to be determined by multi-criteria evaluations. Bel({x1})=m({x1})=0.2) Bel({x2})=m({x2})=0.1. the error range can be viewed as epistemic uncertainty. as shown in Table 3. After consuming all of the available resources.5.5.3) Pl({x2})=m({x2})+m({x1.5. Therefore. 3.5 (3.x2})+m(Ω)=0. the quality of the engineers. The BBA structure is shown in Fig.6) 49 .5 error range from a median value. With this BBA structure.The first method is suspected to have a ±0.x2})+m(Ω)=0. the degrees of belief and plausibility for only elementary propositions {x1}. and so on.5. x3}.7 (3. and the third method has a ±1. x2. We have no further evidence to decide what kind of PDF exists in the error range. The second method has a ±1 error range.4) and. {x2}. and we cannot even find whether the error comes from unknown PDF of input data or from an incompletely defined model. including: the number of experiments.1) Pl({x1})=m({x1})+m({x1.

the difference between Belief and Plausibility.The Plausibility and Belief functions for each elementary proposition can be expressed as shown in Fig. becomes smaller as we obtain more information and knowledge.383 0. and Probability (Pf) in Elementary Propositions Even though evidence theory does not give us a single value. a decision-maker can obtain insight into the problem and avoid mistakes made by misusing assumptions.3 0.7 0.033 0. the probability value is always supposed to be placed between Belief and Plausibility. Plausibility (Pl). even if different distribution functions are assumed. 50 .12.0 Figure 3. Pl] retains all of the information without any excessive and baseless assumptions. Belief and Plausibility can be viewed as lower and upper bounds of probability. Since the bound represents the current uncertainty situation based on available evidence. the result of evidence theory is consistent with given partial information.1 0.12 Belief (Bel). Pl Pf Bel 0. 3. So.5 0. As mentioned before. Probabilities over the frame of discernment with the assumption of uniform distribution for BBAs are also illustrated. the given bound [Bel. The degree of Uncertainty.583 0.5 0. That is.

When only parametric uncertainties in a system model are considered. with a system model f. Y = f (X ) (4. the problem definition of UQ and the Basic Belief Assignment (BBA) structure of engineering applications are presented. the uncertainties in responses are determined by the uncertainties in input parameters.1) It is assumed that the variables in this model are independent of each other and that uncertainties exist only in system parameters.1 Problem Definition The structural responses can be expressed as a vector Y that depends upon an input vector X. First. And. 4. Structural Uncertainty Quantification Using Evidence Theory Uncertainty Quantification (UQ) using the framework of evidence theory for engineering structural systems is introduced in this chapter. some computational issues of using evidence theory are also discussed.4.1. 51 .

x15=[0. the nature of parameter uncertainty in insufficient information situations is better characterized as epistemic.Parametric uncertainty is typically included in aleatory uncertainty due to its stochastic nature. and BBAs from each expert are assigned to each interval with the mapping function m based on the available evidence.1.9 1.0]. Figure 4. as in Fig.2 BBA Structure in Engineering Applications In this work. 0.15. x12=[0. Each interval represents a proposition of the true value of an uncertain parameter. With incomplete and insufficient information. and they even may overlap.25 0. but not very accurately. x1 52 .25]. 1.1 Multiple Interval Information and BBA for an Uncertain Parameter.75 x12 0.9 1. m(x15)=0.25.0 0. x14=[0 0.5]. instead of by an approximated PDF.1.5 0. 4. The intervals can be discontinuous and scattered. the input parameters can be represented as aleatory uncertainties by crude representations to probability density functions. m(x12)=0.75].0] m(x11)=0.1.0 x13 x14 x15 x11=[0. Hence.4.4. we consider the situation that multiple intervals for an uncertain parameter in an engineering structure system are given by information sources. m(x13)=0.5 x11 0. 0. m(x14)=0. x13=[0.

In Fig. The BBA structure satisfies the three axioms of BBA structure. From the given information. such as two experts. Those propositions are emphasized by the normalization of the complementary degree of contradiction in Dempster’s rule of combining. 53 . As mentioned before. 4. because the proposition can be interpreted such that the information source has no idea on how to give specific interval propositions in the frame of discernment with available partial evidence. since the evidence is not transmitted to other propositions. the BBA of x11 can be higher than that of x14 that is including the interval proposition.0]. [0. the frame of discernment for the uncertain parameter x1 is defined as an interval.1. respectively. The BBA for an interval proposition is not distributed over the interval with any distribution function. 1. Multiple sources for the BBA structure. the BBA of the proposition is also viewed as the degree of ignorance. x11. It is the basic concept in Dempster’s rule of combining that the propositions in agreement with other information sources are given more credence. Dempster’s rule of combining fuses interval information from independent sources without refining the intervals. are assumed. It is employed because there is no assumed distribution function of BBA within an interval. the BBA structure will express an unidentified PDF acceptably. When enough discretized intervals are obtained from available evidence. the two subscripts are the indications for an uncertain parameter and interval. When a proposition like x15 in given information indicates the frame of discernment.

After obtaining the combined BBA structure for each uncertain parameter. xc 2 n ∈ xc 2 } (4. For example.3. the degree of plausibility and the degree of belief. xn] ∈X} (4.2.2.1) And. xn] ∈X} (4.3. = xc1 × xc 2 = {ck = [ xc1m . UF ={y : y=f(x) >v and x=[x1. the BBA for the joint proposition set is defined by mc (ck ) = m( xc1m )m( xc 2 n ) (4. is constructed for the structural system model by using the Cartesian product of each uncertain parameter. are obtained by setting the XF set of uncertain input vectors and the UF set of a failure system response. for only two uncertain parameters.…. The joint BBA structure must follow the three axioms of BBA structure. the joint proposition is defined as. v.2) 54 .2) 4.1) XF ={x : y=f(x) >v and x=[x1. xci. as in Eqs. (4.1) and (4. xc 2 n ] : xc1m ∈ xc1 . the joint proposition.2). The failure occurrence of a target system response is defined with a limit-state value. x2.3.….3 Evaluation of Belief and Plausibility Functions The two measurements of evidence theory.3. x2.

it is observed that joint propositions c2.3) and (4. v.After determining the sets. Graphically. XF and UF. 55 .3. by comparing the range of system responses with the limit-state value. Bel(U F ) = Pl (U F ) = m (c k ) (4. (4.4) c ck :ck ⊂ X F .4. c3.ck ∈ c k ck :ck ∩ X F ≠∅ . max [ f(ck)] ] (4.5) Then. the Belief and Plausibility functions are calculated.3.ck ∈ Since the uncertain parameters in a joint proposition are continuous in an engineering application.3) m (c ) (4. each joint proposition will be evaluated as to whether the response range of the joint proposition is included in the UF set partially or entirely. ymin] = [ min [ f(ck)]. For instance. the Belief and Plausibility functions are evaluated by checking all propositions of the joint BBA structure. and c6 are partially included in the UF set and the BBAs for those propositions will be added up for the degree of plausibility. when joint propositions ck border on each other in two-dimensional uncertain parameter space as shown in Fig.4).3. [ymax.2. it is required in the evaluation of Belief and Plausibility functions to find the maximum and minimum responses over the joint proposition range. as given in Eqs.3.3.

in order to alleviate the computational requirement without sacrificing accuracy. sampling method [7. the vertex method [39] can be used to find the system response range. 29. Those two techniques may require intolerable computational effort in a complex and large-scale system. sampling or sub-optimization techniques can be applied to find the maximum and minimum range values in each joint proposition.2 Failure Set. when the limit-state function is expressed as a nonlinear function. 28. a surrogate model can be introduced by taking advantage of available approximation methods. UF c4 c5 c6 c7 c8 c9 a Figure 4. and so forth. However. the function space defined by the frame of discernment 56 .b c1 c3 c2 Failure set. Hence. as in many engineering applications. 40]. When a system response is continuous and monotonic with respect to every uncertain parameter. UF and Joint BBA Structure for Two Uncertain Parameters Several methods have been proposed to find the system response range for each joint proposition in engineering applications: the vertex method [39]. optimization method [36]. To secure the accuracy of a surrogate model.

c k ∈C [ Bel . After combining the information for each parameter. 57 .for a joint BBA structure can be divided into several sub-spaces. Pl ] Figure 4. Pl_dec. Given Information Constructe & Combine BBA Structure Define Structural System Failure Set (Uf) & Function Evaluation Space (FES) Assess Bel & Pl For Failure Region Evaluate Bel and Pl functions Bel (U f ) = c k :c k ⊃ f −1 m(c ) Pl (U f ) = k (U f ).3. Applying the vertex method is justified by the assumption that the target system model has linear variations in a small function space of each joint proposition with respect to every uncertain parameter. since it is our intention to introduce the BBA structure and two measures of evidence theory to an uncertainty quantification problem of an engineering application. The following are the major steps in evaluating uncertainty using evidence theory. c k ∈C m (c ) c k :c k ∩ f −1 FEM Analyzer k (U f ) ≠ 0 . 4. The summary of the uncertainty quantification scheme using evidence theory is presented in Fig.3 Uncertainty Quantification Algorithm in Evidence Theory However. and surrogate models will be constructed over the sub-spaces. the simple vertex method is used in the following example.

4.4 Numerical Example Fig. which represent moments of aerodynamic lifting forces.4 shows the structural model of an Intermediate Complexity Wing (ICW). The degrees of plausibility and belief are obtained by checking all of the joint propositions with the Belief and Plausibility functions. Static loads. 58 .the joint BBA structure is constructed under the assumption of independency of uncertain parameters. and the tip displacement at the identified point in Fig. Tip displacement Upper wing skin Spars and Ribs Tip part (t1) Lower wing skin Root part (t2) Wing Root Figure 4. are applied along the surface nodes. Root chord nodes are constrained as supports. The joint BBA structure must follow the three axioms of BBA structure. There are 62 quadrilateral composite membrane elements ([0°/90°/±45°]) for upper and lower skins and 55 shear elements for ribs and spars. The function evaluation spaces are determined by constructing the joint BBA structure.4 ICW Structure Model 4.

as shown in Fig.04 0.7 PID: E21 BBA: 0.9 PID: E11 1. 4.1 Expert2 1. The interval information is considered to be the most appropriate way to express those uncertainties based on insufficient evidence.5 E15 E14 0. the load factor.8 1.8 0. so there are two uncertain factors for the tip and the root regions.9 1.2 E23 E24 E25 0.4 is considered as a limit-state response function. The nominal value for each parameter is fixed and the real values are obtained by multiplying with the uncertain scale factors.2 1. the nominal value of elastic modulus is 1.0 0. The interval information for elastic modulus and 59 .1 E13 E12 BBA: 0.7 0. Expert2) give their uncertain information for the four parameters with discontinuous and discrete intervals.0 1. Physical linking is used for the skin thicknesses. because the available data for the parameters is not enough to predict any variability.5 0.4 We consider the situation in which two experts (Expert1.1 Figure 4. It is assumed that there are four uncertain parameters: the elastic modulus factor.5 Elastic Modulus Factor Information Two equally credible experts are assumed to give their opinion with multiple intervals for each uncertain parameter.4.85×107 (psi).2 0.2 0.5 E22 0.025 0.025 0.2 1.02 1. for instance.25 0. the tip and the root region of the wing skin thickness factors.14 0. Expert1 0.7 0.

1 1. 4.5 1. 0. There is a discontinuous interval [0.075 Expert2 1. That is.3 0.0 1.0 0. E11 indicates the first expert’s first interval proposition for E factor. Because of the lack of information. Expert1 0. there is no evidence from Expert1 that supports the proposition that the elastic modulus factor exists in that interval.5.02 0.5 PID: 0.1 1. The tip and root skin thickness factors information is shown in Tables 4.0 1.6 P23 P24 P26 0.02 P12 P13 P15 P16 0.6 where PID denotes an indicator of each interval.01 P22 2.load are given in Figs.1 Figure 4.8] that is not covered by Expert1’s opinion.8 1.4 0.1 and 4. 4. the interval information in evidence theory may not be continuous and intervals can overlap. 60 .0 0.2. This scheme allows us to express our opinion intuitively and realistically for given partial information without making additional assumptions.6 PID: P11 BBA: 0.3 1.07 2.4 0.5 1.2 P 14 0. the BBA of E23 is higher than that of E22 because the evidence that is supporting the interval E23 is independent of the evidence supporting the interval E22. In Fig.8 P21 BBA: 0.3 1.6 Load Factor Information As mentioned before.4 P 25 0.005 0. even though the interval E22 includes the interval E23.7.5 0.5 and 4.

0355 Ec3 0.03 [1.95.0014 Ec2 0.0.1.1] 0.5 Ec6 Ec2 Ec1 0.007 Figure 4.2 0.7. 1.05 0.0. 1.07 [0.1.9 Interval [0.7 The opinions from two different experts are consolidated by using Dempster’s rule of combining.9.8 1.1.7 Combined Information for Elastic Modulus Factor 61 .Table 4.0.7 and 4.0 0.9] 0.1. 4. 0. the combined information for elastic modulus and load is given in Figs.85 [1.2] 0.0.0] [0.0.1.8. For example.08 [0.0] 0.05] BBA 0.0057 BBAs Ec6 0.2 Root Wing Skin Thickness Factor (t2) Expert1 Expert2 Interval BBA Interval BBA Interval BBA [0.95] [0.1.8.8173 Ec4 0.1.2 [1.82 [0.95.9.1 Tip Wing Skin Thickness Factor (t1) Interval [0.1 [1.05] Expert2 BBA 0.2] 0.05 1.8.3] 0.2 Ec5 1.1] 0.05 [1.9 Ec1 Ec3 1.9] 0.7.1 0.1.1.05 Expert1 Table 4.3] 0.1393 Ec5 0.1 Ec4 1.7 0.

the vertex method requires 1800 function evaluations that are performed by using ASTROS.0621 Pc2 0.0002 Figure 4.5″.0005 Pc5 0.2) m c (c k ) ε k :ε k ∩ X F ≠ ∅ In this structural analysis problem with four uncertain parameters.4. Pl (U F ) = (4.0 Pc3 Pc1 1. force. and thickness. U F = {disp Tip : disp Tip ≥ 0.8 Combined Information for Load Factor The structural analyses were conducted by using ASTROS [42] to obtain the tip displacements.3 1.5 0.0 1.0034 Pc3 0.5 Pc8 Pc7 Pc2 Pc1 0.0.4427 Pc7 0. our goal is to obtain an assessment of the likelihood that the tip displacement exceeds the limit-state value of 0. Here.6 1.1) This goal is realized by obtaining the plausibility Pl (U F ) for the set of U F with the joint BBA structure for uncertain parameters: elastic modulus.4744 Pc8 0.5′′} (4. the 62 .1 Pc4 Pc5 Pc6 2.0136 BBAs Pc4 0. As a result.4.0032 Pc6 0.8 1.

Thus. belief and probability. 63 .0236 with the given body of evidence. the plausibility value 0.9.0 (v = 1.9.. The complementary cumulative functions for plausibility and belief (CCPF & CCBF) are defined with the functions Pl and Bel for the set UFv with respect to the varying value of v∈U. 4. 4.0001 belief for the failure.3) and (4. where U and UFv are defined as in Eqs. x 2 .4) The CCPF and CCBF functions are illustrated in Fig. By increasing the available information. This result shows that degree of plausibility is 0. 4. x = {x1 .4. plausibility.0001 and the plausibility is 0.9.4). The difference between plausibility and belief can be viewed as the degree of Uncertainty as shown in Fig. Belief and Plausibility can be accepted as lower and upper bounds of an unspecified probability. v ∈ U } (4. CCPF can be interpreted in the same way as cumulative distribution function (CDF) in probability theory. Ignorance varies with limit function value v in Fig.3) U Fv = {disp Tip : disp Tip ≥ v.0).0236 for the tip displacement limitstate violation.9.4. Uncertainty will ultimately be zero. x n ) ∈ X } (4.4.0236 for exceeding the tip displacement limitstate. (4.4. will have the same value..0016 is determined by the axis of plausibility of y < v.. U = {disp Tip : disp Tip = f ( x). as indicated in Fig. For instance.. when we want the plausibility for the occurrence y > 1.belief is 0. 4.0001 and as high as 0. whereas there is at least 0. Uncertainty reflects the lack of confidence in the result of the analysis. a probability for U F can be as low as 0. and the three measures.

we can say that the bound result from evidence theory is reasonably consistent with the given partial information. it is difficult to make a decision when Uncertainty is too large.0016 . Hence. The two measures from evidence theory bracket the failure probability values that could result from any assumed probability distributions within the given interval information. we can obtain and apply the insight regarding the possible uncertainty in a system response. 64 .9 Complementary Cumulative Plausibility and Belief Functions In some cases. However.Pl (U Fv ) = 0. the bound [Bel(UFv). Pl(UFv)] is obtained based on given evidence and without any assumptions. v = 1. With this bound.0 • CCPF (Solid line) / CCBF (Dashed line) for the occurrence y > v Plausibility & Belief for y < v v Figure 4.

29. 44]. Computational Fluid Dynamics (CFD). such as Finite Element Analysis (FEA). 65 . 40] and the vertex method [39].5. are explored and developed. the resulting uncertainty in a system is usually quantified by many repetitive system simulations for all of the possible propositions given by BBA structures of uncertain variables. The popular numerical methods of calculating the resulting uncertainty using evidence theory are the sampling method [7. General reviews of reanalysis methods can be found in literature [43. System Reanalysis Methods for Reliability Analysis Unlike probability theory. efficient computational tools. which cannot be expressed by any explicit function. There are two general categories of reanalysis techniques: surrogate-based methods and coefficient matrix-based methods. system reanalysis techniques. systems are usually numerically simulated with high fidelity tools. The computational cost of UQ analysis using the sampling method can be prohibitive in most engineering structural systems. 28. in evidence theory. Hence. in modern structural designs. Hence in this work. However. the uncertainty in a system is propagated through a discrete Basic Belief Assignment (BBA) structure. and so on.

41] or Design of Experiments [4749]. 55]. The valid bounds depend on the efficiency of the surrogate method and the characteristics of the original system. 25. The approximation model is usually constructed as a simple. and the convergence rate might be slow or even divergent in certain numerical conditions 66 . However. Iterative methods are found to be effective for a small degree of changes in a design and for a sparse stiffness matrix. 40. and so forth. One of the robust ways to increase the accuracy and efficiency of an applied surrogate-based method is to provide more simulation data.Surrogate-based methods generally construct an approximation model of a specific response of a target system with minimum interactions with an original system analyzer. the Sherman-Morrison and Woodbury (SMW) formulas [54. Once the surrogate model is obtained. Coefficient matrix-based methods include iterative methods [50-53]. in coefficient matrix-based methods of structural system reanalysis techniques. the system response of interest can be regenerated without the actual simulation.” such a computer intensive Finite Element Analysis (FEA). Combined Approximation (CA) method [56]. closedform equation based on series expansions [22. On the other hand. optimization based on reliability. Surrogate-based methods are extensively demonstrated in engineering disciplines and successively applied to many engineering designs. the solutions of the surrogate-based methods are valid only within certain bounds. However. reliability analysis. the response of the modified system is obtained by using a special linear system solver for the discretized system directly. the iterative procedure should be continued until the solutions are converged. such as optimization. and so forth. which is usually a “black box.

SMI. The application of SMW is limited to modifications on either extremely small portions of an initial structure or a specific type of element (truss) in FEA. In this work. Moreover. 67 . sequential reanalyses for modifications of different parts of the structure. By employing SMI in an iterative method. which is an improvement of SMW. is developed with the capability to update both the inverse of the modified stiffness matrix and the modified response vector efficiently. if the displacement vector rather than the inverse of the modified stiffness matrix is updated. cannot be performed successively.of the stiffness matrix. but originated from the binomial series expansion. which are the main processes of optimization and reliability analysis. However.59] have been developed to compute the modified displacement vector instead of the modified inverse matrix of the global stiffness matrix. Some techniques using the SMW formulas [58. and an iterative method are coupled is also developed and presented in this chapter. most FEA solvers do not obtain the inverse matrix directly but use a decomposition method to solve the FE equilibrium equations. the Combined Iterative (CI) method in which a direct matrix method. the Successive Matrix Inversion (SMI) method. Since the SMW formulas have been introduced. there have been many efforts to incorporate SMW in structural reanalysis [57].

the generalized convex approximation [61]. There are several one-point approximations (linear.1. and conservative) which can be constructed with function value and gradients information.5.1 Two-Point Adaptive Nonlinear Approximation (TANA) TANA with adaptive intervening variables has the capability of adjusting its nonlinearity to any target function automatically by using two-point information. In this work. The simplest approximation using gradient information is the linear approximation based on a first-order Taylor series expansion. There are several approximation methods in which the information from two design points is used. However. such as second-order gradient information. such as the Two-point Exponential Approximation (TPEA) method by Fadel et al [60]. The intervening variables are defined as 68 . we have function and gradient information at more than one point. 5. TANA presented by Wang and Grandhi is employed for the surrogate-based method. reciprocal. The accuracy of these one-point approximation can be increased by adding higher order gradient information. Since most of the nonlinear solutions of engineering systems are sequential and iterative. the computational cost could be expensive to obtain the higher order gradient in many engineering problems. and Two-Point Adaptive Nonlinear Approximation (TANA) method [21-26].1 Surrogate-Based Reanalysis Techniques Most of surrogate-based techniques are based on a polynomial expansion or Taylor series expansion at a given design point.

1.1.1.2 ) (5. The first-order Taylor series is expanded at the second point.1. that is. …. which is the same for all variables. 69 . 2. X2 in terms of the intervening variables.2) Y2 We can apply the Chain Rule to obtain the function with the physical variables as ∂yi = r xir −1 ∂xi ∂g ∂g ∂xi ∂g 1 1− r = = xi ∂y i ∂xi ∂y i ∂xi r (5. yi. g I (Y ) = g (Y2 ) + n i =1 ∂g ∂y i ( yi − y i.y i = xir .−2r ( xi − x i . the TANA function is 1 g~T ( X ) = g ( X 2 ) + r n i =1 r r xi1. n (5. 2 ) ∂g ∂xi (5. r is numerically calculated so that the difference of the exact and approximate gT(X) at the previous point X1 is zero or minimized.4) X2 The unknown nonlinearity index is determined by matching the function value of the previous design point. i=1.1) where r denotes the nonlinearity index.3) By substituting the intervening variables with the physical variables.

in the improved TANA. which are different for each design variables in developing the approximations.1 − x i . 2 ) ∂g ∂xi =0 (5. The following intervening variables are defined as yi = xipi i = 1. .−1 pi X1 pi ( xi pi − x i .6) where pi is the nonlinear index for each design variable.2 Improved Two-Point Adaptive Nonlinear Approximation (TANA1 & TANA2) As mentioned earlier.1. such as TANA1 and TANA2.5) X2 Therefore.1.1. However.n (5.g(X1) − g( X 2 ) + 1 r n r i =1 r xi1.1. both function and derivative values of two points are utilized to determine the nonlinear indices. The approximate function is assumed as g~T 1 ( X ) = g ( X 1 ) + n i =1 ∂g ∂xi xi1.1 i ) + ε 1 p 70 (5.7) . r can be any positive or negative real number (not equal to zero). 5. TANA uses the same nonlinearity index for all design variables and only the function values at the previous design point are matched to construct the approximation.−2r ( xi .

7). so a difference between the exact and approximate function values at the current point may exist. Unlike the other two-point approximations.1.n (5.n (5. ε1 is computed by matching the approximate and exact function values at the current point. the approximate function value would not be equal to the exact function value at the expanding point because of the correction term ε1.1. (5.7) matches only the derivative values of the current point. representing the residue of the first-order Taylor approximation in terms of the intervening variables yi.1 pi −1 ∂g ∂xi i = 1.1 ∂xi ∂xi pi −1 ∂g ∂xi i = 1. .1. (5.9) X1 Eq.where ε1 is a constant. Eq.7) has n equations and n unknown nonlinearity indices.1.2 xi . this approximation is expanded at the previous point X1 instead of the current point X2. (5.1. 2. This difference is eliminated by adding the correct term ε1 in the approximation. The n equations can be solved by using any numerical techniques. 71 . . Then. pi can be evaluated by letting the exact derivatives at X2 equal the approximation derivatives at this point x ∂g ( X 2 ) ∂g T 1 ( X 2 ) = = i. The reason is that if the approximation is constructed at X2. the derivative of the approximate function with respect to the ith design variable xi is written as xi ∂g T 1 ( X ) = ∂xi xi .8) X1 From this equation. 2. By differentiating Eq.

The approximate function is given as n gT 2 ( X ) = g ( X 2 ) + i =1 xi1.11) The approximation is a second-order Taylor series expansion in which the Hessian matrix has only diagonal elements of the same value ε2.1.−2pi X2 1 p p ( xi . TANA2 uses the same intervening variables used in TANA1.1 i − x i . the approximate function and derivative values are equal to the exact values at the current point.10) TANA1 is simple to formulate.ε1 = g ( X 2 ) − g ( X 1 ) + n i =1 ∂g ∂xi xi1. 2 i ) + ε 2 pi 2 n i =1 p p ( xi i − x i . 2.1.1. 2 pi −1 g( X1) = g( X 2 ) + n i =1 ∂g ∂xi ∂g ∂xi + ε 2 ( xip. . 2 i ) 2 (5.1i −1 pi i = 1. and more importantly. As in TANA1.1 ∂xi xi . 2i ) xip.n (5.13) X2 xi1.1i − xip.1 i ) (5.−1 pi X1 pi p p ( xi .1. there are n+1 unknown constants and they are obtained by using the following equations. x ∂g ( X 1 ) = i . 2 i ) + ε 2 pi 2 72 n i =1 p ( xi . 2 i ) 2 . 2 i − x i .1 i − x i .12) p (5.−2pi ∂g ∂xi X2 1 p p ( xi i − x i .

5. plate.13). In TANA2 method.1) presented by Haftka and Gurdal [68] is selected to compare the accuracy of various approximations. u p y. v Young’s modulus : E 8p Figure 5.From Eq. respectively at both points. The TANA method and its variations (TANA1 and TANA2) have been extensively used in truss.3 Numerical Example A three bar truss (Fig. and turbine blade structural optimization and probabilistic design.12).1. 5. the exact function and derivative values are equal to the appropriate function and derivative values. frame. The results presented in Refs.1.1 Three Bar Truss 73 . A B C A2 l A1 60° A1 60° x. the nonlinearity indices are determined and the diagonal element of the Hessian matrix is from Eq. (5. (5. [21-26] demonstrate the accuracy and adaptive nature of building nonlinear approximation.1. this approximation is more accurate than others. Therefore.

3109].1.00. two design points are selected for approximate functions. The function and derivative values at the design points are X 1 = [0.17) TANA1: pi = [-2.6224 (5.1.2574 . ε2 = 2. = 1. 1.15) ∂g ∂g = −0.18) TANA2: pi = [-0.8742. The stress of member C is required to be less than σ0 both in tension and compression. ε1 = 0. -0.0083 (5.16) X 2 = [1.3785 . The truss is designed subject to stress and displacement constraints with the design variables being the cross sectional areas A1 and A2.The horizontal force p can act either to the right or to the left.75. the constraint function of the stress in member C is written as g ( x) = 1 − σC 3 2 = 1+ − ≥0 σ0 3 x1 x 2 + 0.1.1.25 x1 (5. TANA: r = 1.0226 . After defining normalized design variables.00] : g ( X 2 ) = −0.2527].1.1.25] : g ( X 1 ) = 0.14) where x1 = A1σ 0 / p and x 2 = A2σ 0 / p .5482.28 ∂x1 ∂x 2 (5. The following constants are obtained for TANA approximations. ∂g ∂g = −0. As shown in Fig.19) 74 . = 0.9679 ∂x1 ∂x 2 (5.2. 5.5553 (5. -0.7844 . 1.

00] g(x) Figure 5.2 Two Design Points of the Three Bar Truss To compare the approximations along a straight line.1.21) . The relative error is calculated as follows Relative Error = Exact . 5.75 1.5) X 2 + (0.00 1.X1 X2 X1=[0.20) In Fig.25] X2=[1. a design point is given by the function of t.1.3.5 − t ) X 1 (5.Approximation Exact 75 (5. X = (t + 0. the relative error in the estimation from various approximation methods are shown with respect to the value of t.

3 Relative Error Plots of Various Approximation Methods 76 .Error t Error t Figure 5.

respectively. gL. gC. and gT2. The approximations. gR. TANA1 and TANA2 are indicated by gT.3. 77 . reciprocal. It is observed that the approximations have zero slope at the current design point (t=0. gT1. and quadratic reciprocal approximations. and gQR denote the approximations from one-point linear.In Fig. TANA. 5. conservative.5) and TANA2 gives the most accurate results for a wide range of design points.

BiCG. The most popular methods are Conjugate Gradient (CG) type methods (CG. but here SMI originated from the binomial series expansion. has a wider applicable range of modification than any other technique and the computational cost is significantly reduced. a number of iterative methods for solving large and sparse linear systems have been developed. An excellent review of these iterative methods can be found in the literature [53]. when a design change is arbitrarily large.2 Coefficient Matrix-Based Reanalysis Techniques In this section.5. On the other hand. 55]. the modified response can be obtained very efficiently within a few iterations by using information from the previous analysis. over the last century. a new reanalysis technique. Hence. is developed to include the capability of updating both the inverse of the modified stiffness matrix and the modified response vector. as a direct and exact matrix solver. CGS. the Successive Matrix Inversion (SMI) method. The SMI method. For a small change to the previous system in a system reanalysis. it has been desired to develop an efficient iterative solver that is combined with an exact solution technique to alleviate the difficulties of iterative methods and to improve their 78 . BiCGSTAB) [51] and Generalized Minimal Residual (GMRES) methods [52]. and it is even hard to predict whether the iterative solution is converged or not. The SMI method is an improved version of the Sherman-Morrison and Woodbury (SMW) formulas [54. Updating processes for both the modified inverse matrix and the modified response vector can be used in a combined way for many sequential reanalyses to reduce the overall computational cost. the iterative solution converges very slowly. However.

and even a diverged solution can obtain convergence. the SMI method. In this work. In a sequential analysis. and {d 0 } is the initial response vector.1) where [ K 0 ] is the initial stiffness matrix. is developed from the binomial series expansion by using the same technical concept as SMI. it is found that the convergence rate is accelerated.1 Successive Matrix Inversion Method In Finite Element Analysis (FEA).2. the target structure is changed with small modifications.2.1) is performed using FEA. the Binomial Series Iterative (BSI) method. the main idea of reanalysis techniques is to regenerate the modified system response efficiently without another complete system analysis. is introduced. By employing SMI. most of the computational cost is incurred in inverting or decomposing the stiffness matrix of an engineering structure to solve the equilibrium equations. the inversion of the stiffness matrix. From this initial analysis. Additionally in this work.2. [ K 0 ]{d 0 } = { f } (5. Assume that the initial simulation given by Eq. a new iterative technique. 5. [ K 0 ] −1 79 . So. the Combined Iterative (CI) method in which an iterative method is coupled with an exact matrix solver. (5. { f } is the force vector.performance [53]. The SMI method updates the inverse of the stiffness matrix by considering only the modified portion of the stiffness matrix for the reanalysis of the modified structure.

To evaluate the modified response vector. the structural design is changed as follows.2) where [∆K ] is the stiffness modification matrix {d } is the modified response vector.2) by [ K 0 ] −1 gives ([ I ] + [ K 0 ] −1 [∆K ]){d } = [ K 0 ] −1{ f } (5. In a sequential analysis.and the initial response vector {d 0 } are available.2.4) where [ B ] = −[ K 0 ] −1 [∆K ] (5. binomial series is considered to obtain the inverse of ([ I ] + [ K 0 ] −1 [∆K ]) as follows. 80 .3) We assume for convenience that the first m columns of [∆K] have non-zero elements. premultiplying Eq. and Neumann Series expansion. ([ I ] − [ B ]) −1 . ([ I ] − [ B ]) −1 = [ I ] + [ B ] + [ B ]2 + [ B ]3 + (5.5) This series expansion is known variously as the Binomial Series expansion.2.2. (5.2. (5. there are some limitations [62] for using this series expansion directly to find the inverse of the matrix.2.3). Geometric Series expansion. ([ K 0 ] + [∆K ]){d } = { f } (5. However. For Eq.2.

using more than three series expansion terms for finding an inverse matrix might not be prudent from a computational cost point of view.2. we define the matrix [P ] for the [B] matrix series expansion terms.. In Eq. can be calculated from the element level of the infinite series expansion terms in order to alleviate the aforementioned problems. (5. ([ I ] − [ B ]) −1 .. (5. as shown in Eq.6) The elements of [P] can be obtained as follows. the inversion of the matrix. + Bij( k ) + .2. Even if the convergence criterion is satisfied. A sufficient condition for the convergence of the series is the spectral radius of the matrix [B ] is less than unity.4). The convergence could be quite slow in some cases.7) 81 .1.2.6). (5..2. Due to the first convergence limitation. there is a valid bound on the amount of design modification allowed for using the series method. Pij = Bij(1) + Bij( 2) + Bij(3) + .. [ P ] = [ B ] + [ B ] 2 + [ B ]3 + (5. However. 2.

(5.2.2. rij( k ) = rij .2.7). as shown in Eq. rij( k ) .where Bij(k ) is the (i.8) Bij( k ) In the case where the recursive term is constant through all of the series expansion terms. Pij = Bij (1 + rij + rij2 + rij3 + rij4 + ) (5. as given in Eq. in a general case.2. Eq. that is. (5.10) 1 − rij However.6). (5.2. (5. The right side term (the series expansion) of Eq. is not same with the neighboring recursive terms. that is.2.10).9) is transformed into a simple expression.9) Pij term can be obtained by assuming that there exists an original equation for the series expansion of each B matrix element. the recursive term is not constant 82 . (5.2.2. is obtained as.2. (k ) ij r = Bij( k +1) (5. j)th element of [ B] k in Eq.7) can be expressed as follows. (5. The kth recursive factor in the element series expansion ( rij( k ) ) in terms of Eq. it can be easily observed that the kth recursive term. Pij = Bij (5.9).

5).2.2.but variable for the series expansion. the transformation in Eq. the B matrix also has only jth column elements. Hence. as a constant value.2.12) Each element of [P] is simply given as.13) 1− r 83 .11) j =1 where N is the total degrees of freedom in a structural model and [∆K ( j ) ] is the matrix which has non-zero elements only in the jth column. the variability of the recursive term in the series could be eliminated by decomposing the modified stiffness matrix into separate matrices as follows.10) is not valid in general to obtain the series solution. r = B jj (5. j)th element of the B matrix. (5. Pij = Bij (5. By calculating the series terms with the B matrix. When [∆K ( j ) ] is considered with the definition in Eq. (5. it is easily observed that the recursive term for the B matrix is nothing but the (j.2.2. [∆K ] = N [∆K ( j ) ] (5. However.

( j ) . {Kb ( j −1) } is the jth row vector of [ K ( j −1) ]−1 . However.2. and computational resources are wasted away in repeatedly updating the whole inverse of the modified stiffness matrix unnecessarily.2. ([ K 0 ] + [∆K ]) −1 . Furthermore. {∆K ( j ) } is the jth column vector of [∆K ] . Since the inverse of the modified stiffness matrix. and the subscript j indicates the jth element in a vector. it is noted that most FEA solvers do not actually obtain the inverse matrix. The required number of successive steps is the number of non-zero columns in [∆K ] . 84 . and the initial [ K ( 0) ] −1 is given as [ K 0 ] −1 . can be obtained by setting ([ K 0 ] + [∆K ]) −1 as a new initial inverse of the stiffness matrix. ([ K 0 ] + [∆K ] + [∆K 2 ]) −1 . indicates the successive step. the inverse of the modified stiffness matrix is obtained by a successive inversion procedure using the following three equations: {B ( j ) } = −[ K ( j −1) ] −1 {∆K ( j ) } (5.Due to the decomposed column vector of [∆K ] .16) where the superscript. is obtained in SMI.14) {Bs ( j ) } = {B ( j ) } /(1 − {B ( j ) } j ) (5.2.15) [ K ( j ) ] −1 = [ K ( j −1) ]−1 +{Bs ( j ) }{Kb ( j −1) }T (5. then for the next modification [∆K 2 ] the inverse of the second modified stiffness matrix.

18) {Bs ( j ) } = {B ( j ) } /(1 − {B ( j ) } j ) (5. The column vector of [B] is required to be altered by the influence matrix. can be obtained as [ S ][ K 0 ] −1 . The ultimate purpose of this formulation is to obtain the influence matrix. Note that the modified stiffness matrix.17). is updated successively.3) is the initial response. which is initially [I]. by decomposing [B] into column vectors. which updates the initial response to the modified response with respect to the given modification matrix. ([ K 0 ] + [∆K ]) −1 . The modification matrix is [B] = − [ K 0 ] −1[∆K ] and the right side of Eq. {d } = [ S ]{d 0 } (5.3).20) where {Sb ( j −1) } is the jth row vector of [ S ( j −1) ] .17) As in the SMI procedure. [∆K 2 ] .2. the influence matrix is updated from the initial identity matrix by successive procedures with the following three equations.18). Compared to the previous procedure (Eqs (5.19) [ S ( j ) ] = [ S ( j −1) ] + {Bs ( j ) }{Sb ( j −1) }T (5. {d0}.16)). this procedure is more cost effective because only the influence matrix. (5. {B ( j ) } = [ S ( j −1) ]{B ( j ) } (5.2. (5. at each step.2.2.14).2.2.2. rather than the whole inverse of the stiffness matrix. as shown in Eq.Therefore. [S] = ([ I ] + [ B ]) −1 . as shown in Eq. For the next modification. the 85 . we formulate another problem whose initial matrix is [I].2. (5. (5.2.(5.2. for Eq.

the first column vector of [B] is not changed because [P] is empty. Thus.2. (5. updating the influence matrix.20) the updated [S] matrix. However.18).2. ([ K 0 ] + [∆K ] + [∆K 2 ]) −1 . (5. Hence. (5.2. This means that the influence matrix makes it possible to perform sequential reanalyses. no additional cost is required beyond the computation cost of Eq. (5. The influence vector storage matrix. is skipped. but a new influence vector storage matrix and a new vector-updating operator are introduced. However. [P].2. Because of this updated influence matrix [S (j)] at the next step. (5. then the process of updating the influence matrix. can be tackled through the influence matrix.18). the column vector of [B] is directly changed by matrix-vector multiplication.2. but also the inverse of the modified stiffness matrix. close examination of the equations reveals that in Eq. (5.20). which eventually becomes N×m matrix for m columns modification. is also unnecessary. Eq.20). Eq.inverse of the second modified stiffness matrix. can be avoided. to save the unnecessary computational cost in the SMI procedure.2. can be computed as [ S 2 ][S ][ K 0 ] −1 sequentially. if a successive vector-updating scheme is employed instead of Eqs.2. The jth column of [ S ( j −1) ] is filled up with {Bs ( j ) } and the previously updated (j-1) columns in [S (j-1)] are updated due to the jth column {Bs ( j ) } . Since the successive scheme is updating only vectors. which is started with an identity matrix. as shown in Eq. starts with an empty zero-order matrix.20).18) and (5. not only the modified response vector. At the first updating step. and the 86 . which is a kind of simultaneous superposition operation for the jth column of [B ] .

as follows: j −1 {B ( j ) } = U [ P]{B ( j ) } (5.18). {Bs (1) } . For convenience.2.23).21) The vector. (5. the second column vector of [B] is updated by the influence matrix. (5. {B (j)} is sequentially updated as shown in Eq. {B ( 2) } = {B ( 2) } + {P (1) } × {B ( 2) }2 (5.2.23).2. at the jth stage.2. {B (j)} is simply stored at the jth column of [P].18).2.2. and {Br(j)} is the updated vector{B (j)}. is stored in the first column of [P].manipulated vector. and the jth column vector of [B] is updated sequentially with the (j-1) columns of [P] one by one as follows. instead of by the simultaneous superposition operation shown in Eq. U. And then.22) where {Br(1)} is {B (j)}.2. as follows. {Bs ( 2) } = {B ( 2) } /(1 − {B ( 2) }2 ) . the operation of Eq. the influence vector storage matrix [P] simply stores the {B (j)} vector 87 . which is the same as the vector from Eq. (5. As a summary. the influence vector storage matrix becomes N×(j-1) matrix. is stored as the second column of [P].2. (5. At the jth stage. [P].22) is expressed from now on with a new successive vector-updating operator.23) i =1 After the computations for the {B (j)} vector in Eq. (5. At the next stage. {Br ( k +1) } = {Br ( k ) } + {Br ( k ) }k {P ( k ) } k =1 j −1 (5.

5.24) i=1 88 . [P] (N×m size) Figure 5. without the updating procedure. the modified response vector is obtained as follows: m {d } = U [ P]{d 0 } (5.4.2.4 Successive Matrix Inversion (SMI) Algorithm for m Columns Modification Finally. [ B ] = −[ K 0 ] −1 [ ∆ K ] . after obtaining [P] (N×m size) for all non-zero columns of [B]. The procedure for the proposed SMI method is shown in Fig.in the corresponding column and becomes a matrix with N×j size. [ P ] = [] [ B ] = { B (1 ) } + { B ( 2 ) } + { B ( 3 ) } + + { B (m ) } (m: number of columns that have non-zero elements in [B]) j=1 ~ m j −1 { B ( j ) } = U [ P ]{ B ( j ) } i =1 { P ( j ) } = { B ( j ) } /( 1 − { B ( j ) } j ) j −1 U [ P ]{B ( j ) } = {Br ( k ) } + {Br ( k ) } k {P ( k ) } k = 1 i =1 ( j − 1) where {Br(1)} is {B (1)}.

which is modified with any rank size. Fig. the computational cost depends on the size of modified rank from the initial stiffness matrix. One flop is approximately the work required to compute one addition and one multiplication. is selected as a direct complete analysis method to compare the efficiency with SMI. LU decomposition. 5. expressed by the number of the floating point operations (flops).When the non-zero columns of [∆K] are scattered randomly.5 2 shows the ratio of the computational cost of SMI to LU decomposition. 5.23). one vector that has the information of the locations of non-zero columns in [∆K] might be needed and considered in the SMI procedures. from Eq.2 Some Computational Issues of SMI in Engineering Applications The computational cost of SMI. It is found from Fig. Since the SMI method gives an exact solution for the symmetric and nonsymmetric modification matrices. and besides there is no pivoting procedure in SMI. by using the proposed SMI method. 5. the SMI method is more efficient than the conventional LU decomposition method with about 25% cost savings. 89 .5 that the reanalysis cost for 50% rank modification to the initial stiffness matrix is less than 20% of the complete analysis cost using LU decomposition. is compared with that of a popular direct method. instead of the Cholesky decomposition method. the cost of the proposed SMI method is about 1 (m − 1)mN for m columns modification in the stiffness matrix. Obviously. (5. the LU decomposition method. It is noted that even for full modification of N×N stiffness matrix. the LU decomposition method requires 2/3N3 flops to solve the system.2. For an N×N matrix.2. However.

Since the modification stiffness matrices, [∆K], in engineering structures are usually very sparse, the cost of obtaining [B], which has a small number of non-zero elements, is ignored throughout this work.

[Figure 5.5 Relative Computational Cost Ratios of SMI to LU Decomposition: flops of the SMI method as a percentage of the flops of LU decomposition, plotted against m/N × 100% (m: number of modified columns)]

In many practical engineering structural problems, computational methods such as FEM lead to a sparse matrix. Most direct solvers use decomposition methods, and even though the stiffness matrix is very sparse, it is very hard to take advantage of the sparseness because of the unpredicted fill-in within the process of decomposition. However, in the proposed SMI method, the sparseness of the stiffness matrix can be considered more explicitly to obtain a computational benefit. For a specially structured matrix, such as a diagonally banded stiffness matrix, the

influence vector storage matrix, [P], can be obtained by efficient systematic computations due to the pattern of sparseness in [K].

In a specific analysis, such as reliability analysis, design optimization, and so on, many simulations may be required for sequential modifications to a target structure. By using SMI, the sequential reanalysis can be performed by accumulating the influence vectors sequentially in [P]. For example, the first modified response, {d1}, is obtained from the initial response, {d0}, and [P] for the m1 rank modification as follows:

{d1} = U_{i=1}^{m1} [P]{d0}    (5.2.25)

For the next modification, [∆K2] with m2 non-zero columns, the additional influence vectors are accumulated in the previous [P] matrix. The second modified response, {d2}, can be computed as follows:

{d2} = U_{i=m1+1}^{m1+m2} [P]{d1} = U_{i=1}^{m1+m2} [P]{d0}    (5.2.26)

Hence, for the kth sequentially modified system with mk modified columns, only an additional mk updating processes in Eq. (5.2.23) are required with the kth initial [P] matrix, whose size becomes N×(m1+m2+…+m_{k−1}). However, the cost for the updating process in Eq. (5.2.23) increases rapidly as the number of columns of [P] grows in sequential modifications. Therefore, in the case of many sequential reanalyses with small

change fractions, that is, mk (= q×N) modified columns that are independent of the previous columns, the intermediate process of updating the inverse of the modified stiffness matrix can reduce the overall computational cost by decreasing the sequential updating processes in Eq. (5.2.23). When Tn sequential reanalyses are required with a modification ratio q to N, the total cost (flops) of the Tn sequential reanalyses is as follows:

SSMI_cost = Σ_{j=1}^{Tn} [ Σ_{i1=1}^{qN−1} Σ_{i2=1}^{i1} N + (j−1) qN · 2N ]    (5.2.28)

On the right side of the above equation, the first term is for finding [P] with qN modified columns and the second term is for updating {B} in Eq. (5.2.23) with the previously stored updating vectors in [P]. On the other hand, if the inverse stiffness is updated dn times in the Tn sequential reanalyses, so that D = Tn/(dn + 1) reanalyses are performed in each SMI procedure, the total cost (flops) is computed as follows:

pSMI = (dn + 1) Σ_{j=1}^{D} [ Σ_{i1=1}^{qN−1} Σ_{i2=1}^{i1} N + (j−1) qN · 2N ]    (5.2.29)

UK = q N³ dn    (5.2.30)

TSMI_cost = pSMI + UK    (5.2.31)

where pSMI is the cost of the dn+1 SMI procedures with D sequential reanalyses in each procedure, and UK is the cost of updating the inverse of the stiffness matrix dn times. The

marginal number of sequential reanalyses, Tm, for SSMI_cost against TSMI_cost can be found by solving the following problem:

Tm = {Tn | (SSMI_cost − TSMI_cost)/(LU decomposition cost) = 3 dn Tn q (qTn − 2) / (4(1 + dn)) > 0}    (5.2.32)

It is obvious that the marginal Tm is 2/q from the above equation. The marginal Tm indicates that it is better to employ the process of updating the stiffness matrix for more than Tm sequential reanalyses in the sense of overall cost savings. For sequential reanalyses beyond Tm, the minimum TSMI_cost can be obtained with a sufficient number of updates dn as follows:

Minimum TSMI_cost = lim_{dn→Tn, N→∞} TSMI_cost/(LU decomposition cost) = 3 q Tn² (2 + q) / (4(1 + Tn))    (5.2.33)

For example, when q = 0.01, about 66 simulation results can be obtained by SMI for one complete analysis cost. In other words, one million simulation results can be obtained with only about the cost of 15,000 complete analyses by using SMI, which is about 1.5% of the cost of a complete solver using LU decomposition.

5.2.3 Numerical Examples

5.2.3.1 Plane Truss

The plane truss shown in Fig. 5.6 has 30 rod elements with an elastic modulus E = 10^7 psi and an initial uniform cross-section area A0 = 5.0 in².

[Figure 5.6 Plane Truss Structure: L = 2160 in, H = 360 in, with 20,000 lb loads applied at several nodes and node 7 marked]

A design optimization can be performed to minimize the mass of the structure by setting the cross-section areas of the rod elements as design variables. The maximum displacement at node 7 can be considered as a design constraint. In most gradient-based optimization techniques, there are two major steps in every optimization iteration: finding a search direction and performing a one-dimensional search for a step size. First, for the search direction, sensitivity information of the

objective and constraint functions is usually utilized. In this example, the sensitivity analysis of the constraint function, the displacement at node 7, with respect to the design variables, Ai, involves structural simulations for the specified response. If a non-intrusive sensitivity technique such as the Finite Difference Method (FDM) is employed, at least 30 additional simulations are required to obtain the sensitivity information in each optimization iteration. However, by using the SMI method, the sensitivity information can be efficiently calculated with half the cost of one simulation as follows:

∂Disp_node7/∂xi = {−2.78  −0.74  −0.05  −0.00  −1.00  −0.12  −1.03  −0.01  −0.54  −0.32  −0.08  −0.10  −0.02  −0.08  −0.07  −0.04  −0.00  −0.00  −0.56  −0.00  −2.12  −0.76  −0.30  −0.00  −0.72  −0.08  −0.06  −0.09  −0.00  −0.01}ᵀ    (5.2.34)

where xi denotes a design variable. Since each design variable makes changes in about 16% of the ranks of the stiffness coefficient matrix, the computational cost of SMI is about 1.5% of the complete simulation cost using the popular LU decomposition technique. This means that the number of simulations for the sensitivity analysis using FDM is significantly reduced from 31 to 1.

5.2.3.2 The Application of SMI to Reliability Analysis Using a Sampling Technique

The Intermediate Complexity Wing (ICW) structure shown in Fig. 5.7 is selected to demonstrate the efficiency of the proposed SMI method in reliability analysis. The metallic structural model of ICW is a representative wing-box structure for a fighter aircraft. There are 62 quadrilateral membrane elements for the upper and lower skins and 55

shear elements for the eight ribs and three spars. The uncertainties are assumed to exist in parameters such as random loads, uncertain geometric dimensions, material properties, and so on. Each uncertain parameter is assumed to have an independent Probability Density Function (PDF). Structural reliability analysis is performed to determine the probability of failure of a structure with a limit-state function in which a required performance of a target structure is defined. The limit-state function (G) separates the design space into failure and safe regions:

G(X) = 0,  xi ∈ Failure boundary surface    (5.2.35)

G(X) < 0,  xi ∈ Safe region    (5.2.36)

G(X) > 0,  xi ∈ Failure region    (5.2.37)

where X (∈ ℜⁿ) is a vector of uncertain parameters in the structural design. With the limit-state function, the probability of failure (Pf) is computed as

Pf = ∫_{G(X)>0} p(X) dX    (5.2.38)

where p(X) is the joint probability density function of X. In engineering structural reliability applications, numerical methods, such as the Monte Carlo Simulation (MCS) [7], can generally be performed to evaluate the multiple integration in Eq. (5.2.38). The crude MCS can be expressed as follows:

P̂f = (1/n) Σ_{i=1}^{n} I[G(Xi) > 0]    (5.2.39)

where P̂f represents the crude MCS estimator of the failure probability, Xi indicates a realization of the random parameters from the given PDFs, and n is the total number of MCS samples.
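As a minimal illustration of Eq. (5.2.39), the sketch below estimates P̂f for a hypothetical closed-form limit-state function; in the structural applications of this chapter, each evaluation of G would instead trigger a finite element (re)analysis, which is exactly the cost that SMI reduces.

```python
import numpy as np

def crude_mcs_pf(g, sampler, n, seed=0):
    """Crude Monte Carlo estimator of Eq. (5.2.39): Pf ~ (1/n) sum I[G(Xi) > 0]."""
    rng = np.random.default_rng(seed)
    X = sampler(rng, n)                 # n realizations of the random parameters
    return np.mean(g(X) > 0.0)

# hypothetical limit-state: failure when x1 + x2 exceeds 6
g = lambda X: X[:, 0] + X[:, 1] - 6.0
sampler = lambda rng, n: rng.normal(loc=[2.0, 2.0], scale=[1.0, 1.0], size=(n, 2))
print(crude_mcs_pf(g, sampler, n=100_000))   # ~0.079 = P[N(4, 2) > 6]
```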

[Figure 5.7 Design Variables (βi) for Elements Under Uncertainty in the Elastic Modulus: scale factors β1–β5 assigned to regions of the upper skin, ribs and spars, and lower skin]

In this example, five scale factors (β1–β5) of the elastic modulus (E = 1.05×10⁷ psi) at different local parts of ICW, as shown in Fig. 5.7, are defined as random variables to describe a locally damaged situation. The proposed SMI method is applied to MCS to demonstrate its applicability to sequential repetitive reanalyses in structural reliability analysis. In MCS, samples are obtained from the Cartesian product of the samples of each random variable generated from each PDF. To describe the sequential procedure of MCS, a simple case that has only two random variables (β1 and β2) is shown in Fig. 5.8 as an example.

which is about 0. 5. first stage reanalyses are performed from the initial design for the selected n1 samples from the PDF of β1.8% of the complete analysis cost. It is noted that as shown in Fig. 5. The total cost is about 2.8a. for the n2 samples of the second variable (β2) from the given PDF.2.68% of the complete analysis cost. which can be obtained from Eq. second stage reanalyses are performed by considering [P] from each first stage reanalysis of the first random variable. This means that if both n1 and n2 are 100. (5.8b. the total computational cost of the first stage is only the number of samples (n1) times the cost of SMI for the β1 modification.β1 β1 β2 a) First stage reanalysis b) Second stage reanalysis Figure 5. as shown in Fig. 98 . as shown in Fig. 5. Then.29).8 Sequential Computation Procedure of the SMI Method in Monte Carlo Simulation for Two Probabilistic Variables (β1 and β2) For the first random variable (β1).8a. The total cost of the second stage for β2 modifications is the total number of second stage samples (n1×n2) times the cost for sequential SMI.

then the results of 10,000 simulations are obtained while incurring a total cost of about 2.68 complete analyses, i.e., less than three complete analyses, through SMI.

To obtain the failure probability for the displacement response of ICW, the limit-state function is as follows:

G(β1, β2, β3, β4, β5) = Disp_tip(β1, β2, β3, β4, β5)/11.5 (in) − 1    (5.2.40)

β1 = Normal [0.8, 0.1]    (5.2.41)

β2 = Uniform [0.7, 1.0]    (5.2.42)

β3 = Uniform [0.7, 1.0]    (5.2.43)

β4 = Normal [0.8, 0.1]    (5.2.44)

β5 = Normal [0.9, 0.1]    (5.2.45)

For the current example with five random variables, SMI is applied in the same sequential way as in the case of two random variables. In this MCS, 25 samples of each random variable are used; as a result, the total number of simulations is about 10 million, and the simulation gives about a 0.42% failure probability. Through the sequential reanalyses using the proposed SMI method, the computational cost of MCS is reduced to about 6.5% of the cost of using complete analyses, without reducing the number of total samples. Moreover, the successive SMI analyses for each successive random variable can be assigned to separate computers for an efficient parallel computation scheme.

5.3 Combined Iterative Technique

Over the last century, a number of iterative methods for solving large and sparse linear systems have been developed. The most popular methods are the Conjugate Gradient (CG) type methods [51] and the Generalized Minimal Residual (GMRES) methods [52]. Unfortunately, when a design change is large, the iterative solution can converge only very slowly, and it is even hard to predict whether the iterative solution will converge or not. However, in spite of these numerical difficulties, the use of iterative methods is increasing in practical applications due to several important benefits in terms of computing time and computer storage. These iterative methods can be used as efficient tools for a system reanalysis by utilizing the information from the previous analysis. Since the modification in a reanalysis problem is usually small, the previous response vector can be used as an initial solution vector in the iterative procedures, and the modified response can be obtained very efficiently within a few iterations. For a small change to the previous system, the inverse stiffness coefficient matrix of the previous system can be selected as a preconditioner to speed up the convergence. Therefore, it has been desirable to develop an efficient iterative solver that is combined with an exact solution technique to alleviate the difficulties of iterative methods and to improve their performance [53].

[Figure 5.9 Combined Iterative (CI) Method: direct methods (SMI) offer high global accuracy (exact solutions), inexpensive local modifications, good performance for sequential modifications, and simple implementation, but become costly for global modifications with small design variations; iterative methods offer approximate solutions at O(N²) cost instead of O(N³), small storage requirements, and matrix–vector products that exploit the sparsity of the coefficient matrix, but suffer from slow convergence (no finite iteration number, no guarantee of convergence) and possible stagnation in finite precision computations. The combined iterative method couples a partially direct solution with an iterative procedure, gaining robustness and adjustable solution accuracy with small storage and computational cost.]

It is the main objective of this section to propose the Combined Iterative (CI) method with an exact matrix solver, SMI. The SMI method, which originated from the binomial series expansion, requires a computational cost proportional to the amount the system is changed. As shown in Fig. 5.9, by combining the techniques from direct and iterative methods, we can expect a more robust and efficient solver for a linear system reanalysis. Also, the SMI method makes it possible to perform a sequential reanalysis for both symmetric and non-symmetric coefficient matrices by employing an Influence Vector Storage (IVS) matrix and a Successive Vector-Updating (SVU)

operator. The IVS matrix can be obtained partially for any part of the given modification, and the intermediate solution for the partial modification can be calculated exactly by using the SVU operator. The IVS matrix obtained for a certain part of a whole modification from SMI can be used as a successive preconditioner in an iteration procedure for the rest of the modification. Hence, the SMI method can be applied to only certain parts of the whole modification so that the numerical properties for an iterative process with the rest of the modification are effectively improved. It is found that the convergence rate is accelerated, and even a diverged solution from other stationary iterative methods can be converged by the SMI method. Even for a full-rank modification of a non-symmetric coefficient matrix, the SMI method has better performance than the popular decomposition method, LU decomposition. Additionally, in this work, a new iterative technique, the Binomial Series Iterative (BSI) method, is developed from the binomial series expansion by using the same concept as SMI. Since the BSI method is also valid for non-symmetric cases, its performance is compared with that of one of the most advanced iterative methods, BiCGSTAB [51]. The CI method built from SMI and BSI shows improved efficiency and robustness through a stable iterative behavior due to the simple and straightforward computations in its procedure.

5.3.1 Combined Iterative (CI) Method with SMI

In practice, the design of an engineering structure can involve the modification of the entire structure. In most reanalyses of design optimization procedures, the given modification is smaller than the previous system; that is, the norm of [K] is larger than that of [∆K]. In those cases, iterative methods can be applied more efficiently than the SMI method, because the reanalysis cost of using SMI is fixed by the modified rank ratio of the coefficient matrix. Iterative methods use successive approximation to obtain an accurate solution in a structural reanalysis. Popular iterative techniques for the structural reanalysis problem include preconditioned Krylov-type methods, such as the Conjugate Gradient (CG) type methods (BiCGSTAB [51] and GMRES [52]). For an overall modification with a small degree of change, an iterative procedure to find the modified response can be started by using the previous response as an initial iterative solution, with the inverse of the previous stiffness matrix, [K0]⁻¹, as a preconditioner for the modified linear system to speed up the solution convergence as follows:

[K0]⁻¹[K]{d} = [K0]⁻¹{f}    (5.3.1)

When the modification is very small, the preconditioned system, [K0]⁻¹[K], is close to the identity matrix, [I], and the modified solution can be found in a few iterations. The modification is generally given by the sensitivity information of interest in a design optimization. In every iteration of an optimization procedure, the sensitivity information

for the objective and constraint functions with respect to the design variables is changed to improve the current design. Among the defined design variables of a design optimization, typically there are major and minor contributing variables, which impose large and small modifications on the current structural design based on the sensitivity information. It is obvious that the major contributing variables have a large influence on the numerical properties of the iterative procedure. The basic idea is that, when an arbitrary [∆K] is given, we separate the part that contributes most to the numerical condition from [∆K], and apply SMI to that major part and an iterative method to the rest of [∆K], as shown in Fig. 5.10. The part of [∆K] to which SMI is applied is denoted as [∆K]smi, and the rest, which is handled by an iterative procedure, is denoted as [∆K]iter.

[Figure 5.10 Separating [∆K] Into the Parts for SMI and an Iterative Method: [∆K] = [∆K]smi + [∆K]iter, where [∆K]smi is the primary contributing part of [∆K] to the numerical conditions of the iterative procedure]

To maximize the efficiency of SMI when improving

the numerical condition of an iterative process, the extremal eigenvalues of the preconditioned system, [M]smi⁻¹[K], which are related to the spectral radius, should be eliminated. After obtaining the IVS matrix for [∆K]smi, the preconditioned linear system for the reanalysis is given as follows:

[M]smi⁻¹[K]{d} = [M]smi⁻¹{f}    (5.3.2)

where [M]smi⁻¹ is the preconditioner augmented with the SMI method:

[M]smi⁻¹ = U_{i=1}^{m} [P][K0]⁻¹    (5.3.3)

Alternatively, after obtaining the iterative solution with [∆K]iter first, the modified response can be transformed by the [P] matrix, which is computed for [∆K]smi from SMI, as follows:

{d} = U_{k=1}^{m} [P]{d_iter}    (5.3.4)

In this work, BiCGSTAB, which combines BiCG with repeated GMRES, is employed for the preconditioned linear system. Generally, the convergence behavior of the CG-like methods is known to depend on the distribution of the extremal eigenvalues of the matrix. Since the SMI method is also valid for a non-symmetric matrix, the numerical properties are improved by applying SMI to the columns of [B] which have the largest diagonal elements. This is because CG tends to eliminate the components of the error in the direction of eigenvectors associated with the extremal eigenvalues successively.
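The augmented preconditioner of Eq. (5.3.3) can be prototyped with an off-the-shelf Krylov solver; the sketch below (assuming SciPy, with P holding the SMI influence vectors for [∆K]smi as (vector, column) pairs) is illustrative rather than the dissertation's own implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab

def ci_solve(K0_inv, K_new, f, P, d_prev):
    """Combined Iterative sketch, Eqs. (5.3.2)-(5.3.3): BiCGSTAB on the modified
    system [K]{d} = {f}, preconditioned by [K0]^-1 augmented with the SMI
    influence vectors in P (list of (p_k, col_k) pairs for [dK]smi)."""
    n = f.size

    def m_inv(v):                       # [M]smi^-1 v = U [P] ([K0]^-1 v)
        w = K0_inv @ v
        for p_k, col_k in P:            # successive vector-updating operator
            w = w + w[col_k] * p_k
        return w

    M = LinearOperator((n, n), matvec=m_inv)
    d, info = bicgstab(K_new, f, x0=d_prev, M=M)   # previous response as start
    return d, info
```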

The fast convergence rate can be obtained as the condition number of the linear system, which is a function of the extremal eigenvalues, becomes smaller in each iteration. By using the augmented preconditioner with SMI in a system reanalysis, the iteration method can start with improved numerical properties of the system and show better performance. Moreover, the SMI method can also be applied to an initial complete analysis with any preconditioning iterative technique, such as Incomplete LU (ILU) decomposition. The partial matrix that is accounted for by the ILU decomposition is accepted as an initial matrix, and the remaining part that is not addressed by the ILU decomposition is assumed to be a given modification matrix, [∆K]. As described previously, the SMI method can then be used for a certain [∆K]smi, which is selected from [∆K] to improve the numerical condition of the iterative system.

5.3.2 Binomial Series Iterative (BSI) Method

A new iterative method, the Binomial Series Iterative (BSI) method, is developed in this work as an efficient and robust iterative reanalysis method based on the binomial series expansion. In an optimization procedure of an engineering structural design, the modification is usually smaller than the previous design. In those cases, the spectral radius of the linear system is usually less than unity, which means that the binomial series solution always converges. The BSI method is developed to compute the binomial series expansion efficiently as in Eq. (5.3.5):

{d}_{i+1} = {d}_1 + [B]{d}_i    (5.3.5)

where {d}_1 is the previous response vector, {d0}. The [B] matrix can be replaced by [B]iter if the SMI method is applied to improve the numerical condition before the iterative procedure. As shown in Fig. 5.11a, an element of the d vector converges to its true value as the iteration number of Eq. (5.3.5) is increased. However, since every iteration requires a matrix and vector multiplication, it might be tedious and computationally expensive to find a converged solution from the iterative procedure.

[Figure 5.11 Successive Predicting Process of the BSI Method: a) nonlinear and constant recursive terms in the response history; b) successive predicting process]

In special cases of the binomial series expansion, a constant recursive vector for a response vector {rdc} can

be found, and the converged solution {dc} can be computed with the following simple
equation:

{dc} = {d1}./(1 − {rdc})    (5.3.6)

Unfortunately, the recursive vector usually is not a constant vector, and Eq.
(5.3.6) cannot be used directly. It is found that when the series has a converged solution,
the recursive term in each element of the iterative solution is also converged to a constant
after showing nonlinear behavior for some number of iterations. Hence, as shown in Fig.
5.11a, the iteration history is divided into a nonlinear recursive part and a constant
recursive part. The converged solution can be obtained efficiently by reducing the
computational cost for the constant recursive part as follows:

{dc} = {d1} + Σ_{i=1}^{n−2} {sd_i} + {sd_{n−1}}./(1 − {rd_{n−2}})    (5.3.7)

where n is the number of iterations for finding the converged recursive vector, and the
two vectors, {sd} and {rd}, are defined as follows:

{sd_i} = {d_{i+1}} − {d_i}    (5.3.8)

{rd_i} = {sd_{i+1}}./{sd_i}    (5.3.9)


The second and the third terms in the right hand side of Eq. (5.3.7) denote the
nonlinear recursive part and the constant recursive part, respectively. To accelerate this
procedure in finding an acceptable solution, the converged solution is predicted
successively with an approximated recursive vector using Eq. (5.3.7). The approximated
recursive vector is obtained with a minimum number of iterations (m), which is at least
three, and gives better accuracy than previous solutions without verifying the
convergence of the recursive vector. If the predicted solution is not satisfactory, the
predicting procedure is repeated by using the current predicted solution as an initial
vector, as shown in Fig. 5.11b. Hence, there is a main loop in the BSI method for
computing the predicted converged solution and an inner loop for obtaining the
approximated recursive vector.

To simplify the computations in the jth predicting procedure, the computations of
Eq. (5.3.5) in the inner loop are rewritten as follows:

{d_{j,i+1}} = {d_{j,i}} + {r_{j,i}}    (5.3.10)

where {r_{j,i}} is the residual vector, {r_{j,i}} = {d0} − ([I] − [B]){d_{j,i}} = [B]{r_{j,i−1}}. The recursive vector in the inner loop is obtained in terms of the residuals as

{rd_{j,i−2}} = {r_{j,i}}./{r_{j,i−1}}    (5.3.11)


From Eqs. (5.3.10) and (5.3.11), the j+1th predicted solution with the minimum iteration
number, m, is given as

{dc_j} = {d_{j,m−1}} + {r_{j,m−1}}./(1 − {rd_{j,m−2}})    (5.3.12)

The BSI procedure is shown in Fig. 5.12. The convergence is checked directly by
computing the norm of the residual vector.

[Figure 5.12 BSI Method Flowchart: initialize {d_{1,1}} = {d0} and {r_{1,1}} = {r0}; in the jth prediction, iterate {d_{j,i+1}} = {d_{j,i}} + {r_{j,i}} and {r_{j,i+1}} = [B]{r_{j,i}}, form the recursive vector {rd_{j,i−2}} = {r_{j,i}}./{r_{j,i−1}}, and predict {dc_j} = {d_{j,i−1}} + {r_{j,i−1}}./(1 − {rd_{j,i−2}}); if {dc_j} is improved, restart with {d_{j+1,1}} = {dc_j} and {r_{j+1,1}} = {d0} − ([I] − [B]){dc_j}; stop when {r_{j,i+1}} has converged, returning {d} = {d_{j,i+1}}.]

The minimum number of iterations for an approximated recursive vector in the jth
prediction procedure can be determined by checking the residual with {dcj}. This
checking process can be performed approximately by comparing several selective
elements in the residual vectors from {dj,i} and {dcj}, instead of a full matrix by vector
computation for the norm of the residual with {dcj}. So, in every inner loop, one matrix
by vector multiplication and two vector by vector multiplications are required. Since the
BSI method does not build up orthogonal basis vectors, any possible breakdown or
stagnation, which is possible in Krylov subspace methods, can be avoided. The BSI
method, which has a stationary iterative procedure in the inner loop, can be used in both
symmetric and non-symmetric cases and shows stable convergence behavior. As
described in the previous section, the SMI method can be used to improve the numerical
properties of a linear system for the BSI method.
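A compact Python rendering of the BSI prediction loop of Eqs. (5.3.10)–(5.3.12) is sketched below; the fixed inner iteration count m = 3 and the elementwise safeguard for stagnant residual components are simplifications of the flowchart in Fig. 5.12.

```python
import numpy as np

def bsi_solve(B, d0, tol=1e-10, m=3, max_outer=200):
    """Sketch of the BSI method: predict the limit of d_{i+1} = d0 + B d_i
    (Eq. 5.3.5) from a few inner iterations, Eqs. (5.3.10)-(5.3.12)."""
    I = np.eye(B.shape[0])
    d = d0.copy()
    r = d0 - (I - B) @ d                      # initial residual {r} = {d0} - ([I]-[B]){d}
    for _ in range(max_outer):
        rs = [r]
        for _ in range(m):                    # inner loop, Eq. (5.3.10)
            d = d + rs[-1]
            rs.append(B @ rs[-1])             # residual recursion r_{i+1} = [B] r_i
        with np.errstate(divide="ignore", invalid="ignore"):
            rd = rs[-1] / rs[-2]              # approximate recursive vector, Eq. (5.3.11)
            corr = rs[-1] / (1.0 - rd)        # constant-recursive extrapolation
        corr = np.where(np.isfinite(corr), corr, 0.0)
        d = d + corr                          # predicted solution {dc_j}, Eq. (5.3.12)
        r = d0 - (I - B) @ d
        if np.linalg.norm(r) < tol * np.linalg.norm(d0):
            return d
    return d
```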

5.3.3 Numerical Examples

In this section, two examples of engineering structure reanalysis are presented to
demonstrate the efficiency and accuracy of the proposed methods.

5.3.3.1 Plane Truss

The plane truss shown in Fig. 5.6 is presented again. After obtaining the sensitivity
information from the previous example in section 5.2.3, a one-dimensional search is

usually performed in a design optimization, which involves structural simulations for the maximum displacement. Suppose that the negative of the gradient information (Eq. (5.2.34)) is the feasible and usable search direction at the current stage. The one-dimensional search is performed by changing the value of a step size α in the following equation:

{x_{i+1}} = {x_i} + α{S}    (5.3.13)

where {xi} is the current design and {S} is the search direction. In most optimization algorithms, the constraint function is evaluated repeatedly for every value of α to find an appropriate α value.

[Figure 5.13 Iterative Result and the Improved Eigenvalue Distribution: a) iterative solution history (norm of residual versus iteration for BSI, BSI+SMI(1), BiCGSTAB, and BiCGSTAB+SMI(1)); b) eigenvalue distributions of [B] and of [B] improved by SMI]

Since the one-dimensional search usually requires an overall modification of the stiffness coefficient matrix, the SMI method alone might not be so effective at reducing the total computational cost. However, the structural system is modified based on the previous structure, and the modification is usually smaller than the current structural design, so it makes sense to use the information from the analysis of the previous structural system for the modified structural analysis using the iterative methods. In the search direction vector, there are major and minor direction elements, and the design variables of the major search direction have a major effect on the numerical condition of the linear system.

Fig. 5.13a shows the iterative results of each method for α = 2.0 in Eq. (5.3.13). The spectral radius of the B matrix in this numerical example is more than unity. Even though the series solution should theoretically diverge, the BSI method gives a converged solution. This is because, when a small number of extremal eigenvalues exceed unity, the effect of the high spectral radius of the B matrix is suppressed by the repeated prediction using the minimum number of inner loops in the BSI method. When combined with the SMI method as a Combined Iterative (CI) method, the performance of the iterative methods, BiCGSTAB and BSI, improved significantly. As shown in Fig. 5.13b for the eigenvalue distributions of the B matrix with the SMI method, the extremal eigenvalues are eliminated by using the SMI method, so that the numerical properties of the matrix are improved for the iterative methods. This explains the fast convergence of the iterative methods with the augmented preconditioner. The SMI method is

conducted at the cost of only one matrix-by-vector multiplication in this example, which is indicated by "SMI(1)" in Fig. 5.13.

5.3.3.2 Intermediate Complexity Wing (ICW)

The metallic structural model of the ICW shown in Fig. 5.14 is a representative wing-box structure of a fighter aircraft.

[Figure 5.14 Intermediate Complexity Wing Structure Model and Design Variables: a) shape parameters (t1–t6) of the wing thickness; c) interpolated skin thickness T(x, y)]

The tip displacement of the wing structure is considered as the target response. The design variables are the shape parameters of the wing skin thicknesses, as shown in Fig. 5.14a. The skin thickness at an arbitrary location (x, y), which is symmetric between the upper and lower skins of the wing, is obtained as shown in Fig. 5.14b by applying a weighting function to the shape parameters as

T(x, y) = Σ_{i=1}^{NDV} t_i W_i(x, y)    (5.3.14)

where x and y indicate the rectangular coordinate system on the wing skin, NDV is the number of design variables, t_i is the ith design variable, and W_i is the weighting function

W_i(x, y) = φ_i / Σ_{k=1}^{NDV} φ_k    (5.3.15)

The weighting function, W_i, determines the contribution of t_i to the thickness at the location of interest. The blending function φ is the inverse of the distance between the location of interest and the shape parameter:

φ_i = (1/h_i)^γ    (5.3.16)

where h_i is the distance between the location of a design variable and the current location, and γ is a nonlinear index for the blending function (e.g., γ = 2.0).
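The interpolation of Eqs. (5.3.14)–(5.3.16) can be sketched directly; the design-variable locations and thickness values below are hypothetical.

```python
import numpy as np

def skin_thickness(x, y, dv_locs, t_vals, gamma=2.0, eps=1e-12):
    """Sketch of Eqs. (5.3.14)-(5.3.16): interpolate the skin thickness T(x, y)
    from shape parameters t_i with inverse-distance blending, phi_i = (1/h_i)^gamma."""
    h = np.hypot(dv_locs[:, 0] - x, dv_locs[:, 1] - y) + eps   # distances h_i
    phi = (1.0 / h) ** gamma                                   # blending (Eq. 5.3.16)
    w = phi / phi.sum()                                        # weights (Eq. 5.3.15)
    return float(w @ t_vals)                                   # T(x, y) (Eq. 5.3.14)

# hypothetical design-variable locations on the planform and thickness values
dv_locs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 2.0]])
t_vals = np.array([0.30, 0.25, 0.15, 0.10])
print(skin_thickness(0.5, 1.0, dv_locs, t_vals))   # equidistant point: mean, 0.20
```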

Unlike the previous plane truss example, one design variable here causes changes to all of the skin elements on the wing. In this case, the SMI method, which is an exact method, might not be cost efficient in a sensitivity analysis using FDM, because usually only a small deviation of each design variable is imposed in FDM. Approximated responses, which have enough accuracy for FDM, can be computed by employing iterative methods for the sensitivity analysis; it is obvious that the additional solutions can be obtained in a few iterations through preconditioned iterative procedures with information from the initial analysis. For a one-dimensional search with the current design and direction vectors

{x}ᵀ = [3.0000  3.7500  1.5000  1.1250  3.0000  3.0000]×10⁻²    (5.3.17)

{S}ᵀ = [2.9400  −0.3675  −0.5512  −0.1837  0.0735  0.0735]×10⁻²    (5.3.18)

the results with α = 0.7 from the different iterative methods are shown in Fig. 5.15.

[Figure 5.15 Iterative Solution History of the CI Method: residual norm versus iteration for BSI, BSI+SMI(1), BiCGSTAB, and BiCGSTAB+SMI(1)]

Again, the performance of the iterative methods combined with the SMI method is improved for each iterative method. Fig. 5.16 shows the distributions of the eigenvalues of the B matrix. The SMI method is applied to the B matrix at the cost of only one matrix-by-vector computation. After applying SMI to the B matrix, the extremal eigenvalues are eliminated and the band of eigenvalues becomes small.

[Figure 5.16 Improved Eigenvalue Distribution During Reanalysis Using the CI Method: eigenvalues of [B] and of [B] improved by SMI]

Even though the BSI method alone is not competitive with other CG-like methods, the number of iterations of the BSI method is significantly reduced by enhancing the numerical condition with SMI, so that an acceptable solution is obtained, as shown in Fig. 5.15. The BiCGSTAB method, whose convergence rate mainly depends on the separation of the extremal eigenvalues of the B matrix, is less sensitive to the value of the spectral radius than the BSI method and obtains a relatively small benefit from the SMI method. Also, the iterative behavior of the BSI method is usually more stable than that of the CG-like methods, because there is no computation that might involve a numerical instability.

In this chapter, the SMI method for minor modifications in structures, which is useful in gradient calculation, is presented. Additionally, a new iterative technique, the BSI method, is developed by using the same technical concept as SMI. The CI method is then developed by coupling the direct matrix method (SMI) with any iterative method; in the CI method, the numerical conditions for a converged iterative solution are successfully improved by SMI. Other iterative methods that are not mentioned in this work can also take advantage of the SMI method.

6. Cost-Efficient Evidence Theory Algorithm

For multiple uncertain parameters in a structural system, a joint BBA structure, which is similar to the joint probability density function in probability theory, is defined by the Cartesian product of the combined BBA structures. The Belief and Plausibility functions can then be evaluated by simulating the target system for the limit-state function. The popular methods for computing the required minimum and maximum responses are the sampling method [7, 28, 29, 40] and the vertex method [39]. In the sampling method, the simplest way is to assume a uniform PDF for each possible event. After generating a desired sample population from the assumed PDF, the Belief and Plausibility functions are calculated by comparing the range of system responses with the limit-state value. If the population is large enough, the sampling method gives a robust result. However, it requires extensive computational effort for repetitive simulations with FEA or CFD codes, and it can be inappropriate for other engineering design tasks, such as the sensitivity analysis in evidence theory [13]. By using the vertex method, in which only the structural simulations at the vertices of each possible event are required, the evaluations of the Belief and Plausibility functions are simplified and the computational cost is reduced.
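For reference, a schematic of the vertex method under the monotonicity assumption discussed next is sketched below; the limit-state function g and the interval/BBA lists are illustrative placeholders, and each corner evaluation would be a structural simulation in practice.

```python
import itertools
import numpy as np

def vertex_bel_pl(g, bbas, y_limit):
    """Vertex-method sketch: bbas is a list (one entry per uncertain variable) of
    (interval, bba) pairs; failure is g(x) >= y_limit. Returns (Bel, Pl)."""
    bel = pl = 0.0
    for joint in itertools.product(*bbas):          # joint BBA structure
        m = np.prod([b for _, b in joint])          # product of the marginal BBAs
        corners = itertools.product(*[(lo, hi) for (lo, hi), _ in joint])
        vals = [g(np.array(c)) for c in corners]    # simulations at vertices only
        y_min, y_max = min(vals), max(vals)         # response range (monotonic g)
        if y_min >= y_limit:                        # proposition inside failure set
            bel += m
        if y_max >= y_limit:                        # proposition intersects failure set
            pl += m
    return bel, pl

# illustrative two-variable example
g = lambda x: x[0] + 2.0 * x[1]
bbas = [[((0.8, 1.0), 0.6), ((1.0, 1.2), 0.4)],     # variable 1: intervals with BBAs
        [((0.9, 1.1), 0.7), ((1.1, 1.3), 0.3)]]     # variable 2: intervals with BBAs
print(vertex_bel_pl(g, bbas, y_limit=3.2))
```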

With the assumption that the limit-state function is monotonic, the vertex method can be useful to quantify the uncertainty. However, if the limit-state function is nonlinear and non-monotonic, the response sets of the joint events of a limit-state function can be inaccurate, and the prediction of uncertainty can fail. Even though structural system responses, such as displacement, stress, fundamental frequency, buckling load, and so on, can be monotonic with the given uncertain parameters in some cases, in many engineering structural UQ analyses the system failures are defined by non-monotonic limit-state functions. Moreover, the failure region is usually small, so a large amount of computational resources is wasted on the region that does not contribute to the resulting uncertainty. Therefore, the motivation of this section is to develop a cost-effective algorithm by using a surrogate model approach to reduce the overall computational cost and by focusing the computational resources only on the failure region. First, the proposed algorithm identifies the failure region in a defined UQ space by employing a mathematical optimization technique, and then an approximation approach is adopted to construct a surrogate of the original limit-state function for the repetitive simulations of the UQ analysis.

6.1 Multi-Point Approximation

In this work, the Multi-Point Approximation (MPA) method [21] is employed. The general formulation of MPA is given as follows:

F̃(X) = Σ_{i=1}^{N} w_i(X) F̃_i(X)    (6.1.1)

where N is the number of local approximations, X is the vector of uncertain variables, F̃_i(X) is a local approximation of the original limit-state function, and w_i(X) is a weighting function that determines the contribution of each local approximation. It is assumed that the information at the sampled points is accurate, and the weighting functions in Eq. (6.1.1) are constructed to reproduce the exact function value and gradient values at the points where the local approximations were built. The weighting function can be expressed as follows:

w_i(X) = φ_i(X) / Σ_{i=1}^{N} φ_i(X)    (6.1.2)

where φ_i(X) is a blending function. There are several possible blending functions, and in this work the blending function is given by:

φ_i(X) = 1/h_i    (6.1.3)

where h_i is basically the distance between the current target point and the sampled points that are used for constructing the local approximations. Physically, when the current target is far from a sampling point of a particular local approximation, the contribution of that local approximation is minimal. The details for evaluating the weighting function and the blending function can be found in Ref. [21]. The accuracy of MPA mainly depends on the local approximation, hence the choice of local approximation is important. In this work, the Two-Point Adaptive Nonlinear Approximation (TANA2) method, developed by Wang and Grandhi [22], is employed as the local approximation method. The efficiency and accuracy of this method were extensively demonstrated in many engineering disciplines [21-26].
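A minimal sketch of the MPA blending of Eqs. (6.1.1)–(6.1.3) is given below; first-order Taylor expansions stand in for the TANA2 local approximations, so the sketch illustrates only the weighting scheme.

```python
import numpy as np

def mpa_surrogate(x, points, f_vals, grads, eps=1e-12):
    """Multi-point approximation sketch, Eqs. (6.1.1)-(6.1.3): blend local
    approximations with inverse-distance weights. Linear Taylor expansions are
    used here as stand-ins for the TANA2 local models."""
    x = np.asarray(x, dtype=float)
    h = np.array([np.linalg.norm(x - p) for p in points])     # distances h_i
    if np.any(h < eps):                                       # exactly at a sample point
        return f_vals[int(np.argmin(h))]
    phi = 1.0 / h                                             # blending fn (Eq. 6.1.3)
    w = phi / phi.sum()                                       # weights (Eq. 6.1.2)
    local = [f + g @ (x - p) for p, f, g in zip(points, f_vals, grads)]
    return float(w @ np.array(local))                         # Eq. (6.1.1)

# illustrative: two expansion points of f(x) = x1^2 + x2
points = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
f_vals = [0.0, 2.0]
grads  = [np.array([0.0, 1.0]), np.array([2.0, 1.0])]
print(mpa_surrogate([0.5, 0.5], points, f_vals, grads))  # 0.5; exact value is 0.75
```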

a surrogate of the original limit-state function constructed by using the MPA method can be used instead of the repetitive simulations to reduce the computational cost in UQ analysis. it is assumed that the failure region is comparatively small in the defined joint frame of discernment.Hence. the computational effort of structural simulations could be allocated efficiently by identifying the failure region. 123 . For the first step. Also. Failure region F Failure boundary point Safe region Initial point x1 x2 Figure 6. constructing a surrogate of the original limit-state function using the MPA method. The proposed algorithm consists of two main steps: i). instead of investigating over the entire space for a limit-state function.1 Identifying the Failure Region Using an Optimization Technique This failure region could be identified by solving an optimization problem. The problem can be formulated as follows. finding the failure region in a defined joint frame of discernment and ii).

minimize:  |Y_Limit − f(X_i)|    (6.2.1)

subject to:  X_L ≤ X_i ≤ X_U    (6.2.2)

where X_i is the design vector of uncertain parameters at the ith iteration, Y_Limit is the limit-state value of the system response, and X_L and X_U indicate the lower and upper bounds of each parameter from the frame of discernment. To solve this optimization problem, a number of techniques are available [41]. In this work, a gradient-based optimization technique, the Sequential Quadratic Programming method, is applied. Identifying the failure region by an optimization technique is illustrated in Fig. 6.1 with an arbitrary limit-state function and value.

[Figure 6.2 Deploying APs and Constructing the Surrogate on the Failure Region: Approximation Points (APs) are placed over the failure region in the (x1, x2) space, and the local approximations are blended into the surrogate F̃(X) = Σ_{i=1}^{N} w_i(X)F̃_i(X)]

The cost of this optimization procedure can be reduced by relaxing the convergence criteria, because the exact optimum point (or the exact failure boundary point) is not required; only an approximate optimum, which is close to the boundary of the failure region, is needed in this step.
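The boundary search of Eqs. (6.2.1)–(6.2.2) can be prototyped with SciPy's SLSQP routine (an SQP variant), as sketched below; the limit-state function and bounds are hypothetical, and the loose tolerance reflects that only an approximate boundary point is required.

```python
import numpy as np
from scipy.optimize import minimize

def find_failure_boundary(f, y_limit, x_lower, x_upper, x0):
    """Sketch of Eqs. (6.2.1)-(6.2.2): drive f(X) toward the limit-state value
    inside the frame-of-discernment box. SLSQP stands in for the SQP method."""
    obj = lambda x: abs(y_limit - f(x))          # |Y_Limit - f(X)|
    bounds = list(zip(x_lower, x_upper))         # X_L <= X <= X_U
    res = minimize(obj, x0, method="SLSQP", bounds=bounds, tol=1e-3)
    return res.x

# hypothetical limit-state function over two uncertain parameters
f = lambda x: x[0] ** 2 + x[1]
xb = find_failure_boundary(f, y_limit=2.0, x_lower=[0.0, 0.0],
                           x_upper=[2.0, 2.0], x0=np.array([0.5, 0.5]))
print(xb, f(xb))   # f(xb) should be close to the limit-state value 2.0
```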

For a special case in which multiple failure regions (e.g., multiple most probable failure points in the probabilistic context) are expected, the procedure of identifying the failure region can be performed with multiple initial points to find the multiple failure boundary points. After obtaining a failure boundary point, Approximation Points (APs) for constructing the local approximations are deployed over the failure region. Since the uncertain parameters in a joint proposition are continuous in an engineering application, the deployment of APs can be performed with a factorial design, which is a Design Of Experiments (DOE) technique [47-49]. The first APs are deployed with large variations of the factorial design, and TANA2s are constructed between the neighboring points. To confirm the MPA accuracy, the exact simulation values and the approximation values are obtained and compared at several intermediate sampling points. If the MPA accuracy is not acceptable, additional APs are distributed with small variations of the factorial design, and the local approximations are updated. This procedure is repeated until the desired accuracy of MPA is obtained. After finding the multiple failure regions, the MPA is constructed over the failure regions as previously described, as shown in Fig. 6.2.

3 The Cost-Efficient Algorithm for Assessing Bel and Pl 126 .2. such as FEA or CFD.[ymax.3) The maximum and minimum responses are obtained with trivial computational cost by using the surrogate model constructed by the proposed algorithm because the surrogate is just a closed form equation and it replaces computationally intensive simulations. Pl] Figure 6. Constructing a Surrogate Model Given information Combining information Defining function evaluation space ( JFD) Identifying the failure region boundary Initial factorial design for TANAs Constructing MPA Is the accuracy of MPA acceptable? FEM Analyzer No Reconstructing factorial design (Decreasing variation) Yes No Is the whole failure region covered? Reconstructing factorial design (Expanding design space) Yes Evaluating Bel & Pl functions Surrogate model (MPA) [Bel. max [ f(ck)] ] (6. ymin] = [ min [ f(ck)].

In the calculation of the Belief and Plausibility functions, joint propositions in the
failure region are evaluated as to whether the response range of the joint proposition is
included in the UF set partially or entirely, instead of by obtaining the XF set. As a summary of the proposed cost-efficient algorithm, Fig. 6.3 shows the procedure of UQ
analysis using evidence theory.

6.3 Numerical Examples

6.3.1 Composite Cantilever Beam

A composite cantilever beam with a point load is considered, as shown in Fig. 6.4.
To simplify the calculation of tip displacement of the composite beam, a symmetric
laminated beam is used with one composite material and [±45]s angle plies.

[Figure 6.4 Composite Cantilever Beam Structure Model: a cantilever beam of length L, height h, and width b, with a symmetric (45°/−45°/−45°/45°) lay-up and a tip load F0]

The tip displacement is obtained by the classical laminated plate theory [63] in
terms of composite material properties as follows:

δ_Tip = (F0 L³ / (2h³)) × [E_L² − 4 G_LT E_T ν_LT² + E_L (E_T + 4 G_LT + 2 E_T ν_LT)] / [E_L G_LT (E_L + E_T + 2 E_T ν_LT)]    (6.3.1)

where h, L, and F0 are the height (3.81 cm), length (50.8 cm) of the beam, and the applied
load per width (350 kN) respectively.

For the composite material (graphite fabric-carbon matrix), EL and ET are the
longitudinal and transverse Young’s moduli (173 GPa and 33.1 GPa), GLT is the shear
modulus (9.38 GPa), and νLT is the Poisson’s ratio (0.036). In this example, the Young’s
moduli, EL and ET, are considered as uncertain variables, and the goal is to obtain the
assessment of the likelihood that the tip displacement exceeds the limit-state value of
5.59 cm.

U_F = {δ_Tip : δ_Tip ≥ 5.59 cm}    (6.3.2)

Due to the lack of data and knowledge, only the multiple interval information for
the scales (α and β) of the Young’s moduli (EL and ET) is available, as shown in Fig. 6.5.
The interval information for the uncertain variables, EL and ET, is taken as BBA
structures without imposing any additional assumptions on the intervals. The BBAs of
intervals may not be continuous and they could overlap. The possible values of the

uncertain Young's moduli are obtained by multiplying the previously given material properties by the scale factors.

[Figure 6.5 Scale Factor (α, β) Information for EL and ET: multiple, possibly overlapping intervals are defined on the scale ranges α ∈ [0.375, 1.500] and β ∈ [0.500, 2.0], with the BBAs
m(α1) = 0.0086, m(α2) = 0.0086, m(α3) = 0.0240, m(α4) = 0.0103, m(α5) = 0.2243, m(α6) = 0.4966, m(α7) = 0.0993, m(α8) = 0.0514, m(α9) = 0.0771;
m(β1) = 0.0075, m(β2) = 0.0075, m(β3) = 0.0226, m(β4) = 0.3158, m(β5) = 0.5263, m(β6) = 0.0902, m(β7) = 0.0301.]

In this example, it is expected that the vertex method will not fail to calculate the
plausibility and belief, because the limit-state function is monotonic with respect to
uncertain variables, as shown in Fig. 6.6.


[Figure 6.6 Tip Displacement (δTip) of the Composite Cantilever Beam with Respect to the Scale Factors (α and β), and the Surrogate Failure Region Using the Proposed Method]

The vertex method requires 72 original function evaluations to check the vertices
of the joint BBA structure. However, by using the proposed method, the number of total
function evaluations required for identifying the failure region boundary and for
constructing MPA is only 24. The computational cost saved is about 67% and the same
UQ analysis results as the vertex method are obtained, as shown in Table 6.1. The
computational savings garnered by the proposed method mainly depend on the ratio of the failure region to the entire joint frame of discernment; that is, the smaller the ratio
becomes, the lower the computational cost.


Table 6.1 Composite Cantilever Beam Results Using the Vertex and Proposed Methods

Method            Bel          Pl      Number of function evaluations
Vertex method     1.2875×10⁻⁴  0.0100  72
Proposed method   1.2875×10⁻⁴  0.0100  24

In this example, from Table 6.1, the degree of plausibility is 0.01 with the given imprecise information, whereas there is at least 1.2875×10⁻⁴ belief for the failure. Belief and Plausibility can be accepted as the lower and upper bounds of an unspecified probability density function for the given interval information. Thus, the probability of U_F can be as low as 1.2875×10⁻⁴ and as high as 0.01 for the failure of the composite cantilever beam regarding the defined tip displacement limit-state.

6.3.2 Intermediate Complexity Wing (ICW)

The structural model of an intermediate complexity aircraft wing is shown in Fig. 6.7. In this model, the relative tip displacement at the marked point is restricted to less than 20.3 cm as a limit-state function, and the system failure set is defined by Eq. (6.3.3):

U_F = {δ_Tip : δ_Tip ≥ 20.3 cm}    (6.3.3)

The uncertainties are assumed to exist in the static loads, the Young's moduli, and the ply angles of the composite elements. The uncertainties of the Young's moduli and ply angles are considered only in the root region, as indicated in Fig. 6.7, in order to represent the structural integration defects that can reduce the structural stiffness through fatigue, crack propagation, and so on.

[Figure 6.7 ICW Structure with Uncertainties in the Root Region: upper and lower wing skins with spars and ribs; the uncertain Young's moduli region is at the wing root and the tip displacement location is marked (chord and span in cm)]

The actual values of the Young's moduli and ply angles are obtained from the uncertain factors (α,

β) as follows:

E = α × original Young's modulus    (6.3.4)

θ = β × original ply angle    (6.3.5)

Due to various operational conditions, different aerodynamic pressure distributions are imposed on the wing model. Two aerodynamic pressure distributions are obtained by the steady aeroelastic trim analyses (roll and lift) with an aerodynamic model of ICW, shown in Fig. 6.8, at 0.7 Mach using ASTROS [42].

[Figure 6.8 Aerodynamic Model of ICW (chord and span in cm)]

[Figure 6.9 Aerodynamic Pressure (Cp_lift) Distributions from Steady Aeroelastic Trim Analysis of Lift Forces (Cp in kPa over chord and span in cm)]

[Figure 6.10 Aerodynamic Pressure (Cp_roll) Distributions from Steady Aeroelastic Rolling Trim Analysis (Cp in kPa over chord and span in cm)]

The aerodynamic pressure distribution of the rolling trim analysis (Cp_roll) is obtained for a rolling rate of 1.0 rad/sec, and the aerodynamic pressure distribution of the lifting analysis (Cp_lift) is for an angle of attack of 5°, as shown in Figs. 6.9 and 6.10. In this example, the static loads on the structural model are assumed to be independent of the material properties, and they are obtained by the combination of the aerodynamic pressure distributions, as given by Eq. (6.3.6):

Cp = (γ/1.5) Cp_roll + ((1 − γ)/1.5) Cp_lift    (6.3.6)

where γ is the uncertain combination factor in this example. After obtaining the combined aerodynamic pressure distribution on the aerodynamic model, the structural static loads along the surface nodes are obtained by the equivalent force transfer method integrated with the spline transformation technique [42]. Therefore, there are three uncertain scale variables (α, β, and γ). It is assumed that only imprecise information is available because of the lack of data. Multiple intervals of imprecise information for each variable are given by two independent experts, as shown in Figs. 6.11 and 6.12. The interval information from the two experts is aggregated by Dempster's rule of combining [Eq. (3.1)] to obtain the combined BBA structure of each uncertain variable.
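A minimal sketch of Dempster's rule for interval-focal BBAs is given below; intersections of the focal elements form the combined focal elements, the conflicting mass is renormalized away, and the two expert inputs are illustrative.

```python
def dempster_combine(m1, m2):
    """Dempster's rule sketch for interval-focal BBAs: m1, m2 map (lo, hi)
    interval tuples to BBA values. Non-empty intersections accumulate product
    mass; the conflict mass on empty intersections is renormalized away."""
    combined, conflict = {}, 0.0
    for (a_lo, a_hi), ma in m1.items():
        for (b_lo, b_hi), mb in m2.items():
            lo, hi = max(a_lo, b_lo), min(a_hi, b_hi)
            if lo < hi:                                   # non-empty intersection
                combined[(lo, hi)] = combined.get((lo, hi), 0.0) + ma * mb
            else:
                conflict += ma * mb                       # mass on the empty set
    k = 1.0 - conflict                                    # normalization factor
    return {iv: m / k for iv, m in combined.items()}

# illustrative expert opinions on a scale factor
expert1 = {(0.8, 1.0): 0.7, (1.0, 1.2): 0.3}
expert2 = {(0.9, 1.1): 0.6, (0.7, 1.0): 0.4}
print(dempster_combine(expert1, expert2))
```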

[Figure 6.11 Interval Information for Uncertain Variables (α, β, and γ) from the First Expert: multiple intervals with corresponding BBAs for α (scale factor for the Young's moduli), β (scale factor for the ply angles), and γ (combination factor of the aerodynamic pressures)]

[Figure 6.12 Interval Information for Uncertain Variables (α, β, and γ) from the Second Expert: multiple intervals with corresponding BBAs for α, β, and γ]

As a result, the UQ analysis using evidence theory shows that there are 0.006491 belief and 0.200417 plausibility values for the failure of the wing structure regarding the tip displacement limit-state function. Table 6.2 shows the UQ analysis result of the proposed method, comparing it to the results from the sampling method with 150,000 samples and from the vertex method.

Table 6.2 ICW Results Using the Sampling, Vertex, and Proposed Methods

Method            Bel        Pl        Number of function evaluations
Sampling method   0.006491   0.200417  150,000
Vertex method     0.006500   0.156408  9504
Proposed method   0.006491   0.200417  1631

Even though the vertex method reduces the number of simulations of the limit-state function from 150,000 to 9504, it fails to calculate the correct degrees of belief and plausibility due to the non-monotonicity of the limit-state function for the ICW structure. The number of original function evaluations is significantly reduced to 1631 by using the proposed method, compared to 9504 by using the vertex method (around 80% computational cost savings). Of the three methods, the proposed method gives robust results, as does the sampling method, because the nonlinearity and the non-monotonicity are captured by the surrogate model. The gap of the bound can be reduced, or even a single-value result can be calculated, by employing additional assumptions. However, it should be remembered that, without justifying the assumptions with evidence or data, the result could be merely the reflection

of the assumptions. In this example, the bound result ([0.006491, 0.200417]) using evidence theory can be viewed as a robust result, because it is obtained without any additional assumption and it includes all the probability results that could be obtained by applying different assumptions to the given imprecise information in probability theory. The surrogate of the limit-state function is constructed for the tip displacement limit-state function using MPA with three variables. In the local approximation TANA2, as shown in Eq. (5.11), the addition of one more uncertain variable requires only one function gradient with respect to the additional variable and one error-correction term, so TANA2 can handle a large number of uncertain variables efficiently. Moreover, once the surrogate is constructed, there is no additional high computational cost in evidence theory for updating the bound result with a reinforced expert opinion or refined interval information, since, unlike the vertex method or the sampling method, the limit-state in UQ analysis is expressed by a single closed-form equation using MPA. Hence, the benefits of the proposed algorithm can be realized in other analyses using evidence theory, such as sensitivity analysis, reliability-based optimization, and so on.

7. Comparison of Reliability Approaches with Imprecise Information

Until now, when both aleatory and epistemic uncertainties are present together in a system, Uncertainty Quantification (UQ) has been performed by treating them separately, or by making assumptions to accommodate either a probabilistic framework or a possibilistic framework. However, because of the flexibility of the basic axioms in evidence theory, not only epistemic uncertainty but also aleatory uncertainty can be tackled in its framework without any baseless assumptions. In this section, the possibility of adopting evidence theory as a general tool of UQ in an engineering structural system is investigated with the cost-efficient UQ methodology introduced in the previous chapter.

7.1 Problem Definition with Imprecise Information

The form of the mathematical model that describes the physical system can be expressed abstractly as Eq. (7.1.1):

Y = f(X)    (7.1.1)

where Y = [y1, y2, …, yn] is a vector of system responses and X = [x1, x2, …, xn] is a vector of input data. In this work, only parametric uncertainty is considered; that is, there is no uncertainty in the defined mathematical modeling, system failure modes, and so on. When only parametric uncertainty is considered, the uncertainty of Y is determined from the uncertainty of X in the model. Once enough data for the parameters of X are obtained, the parametric uncertainties in X can be expressed by PDFs, and probabilistic UQ techniques can be used. In probability theory, the uncertain information consists of a probability density p on all finite elementary events of S, the universal set of events, such that

p: S → [0, 1] and Σ_{s∈S} p(s) = 1    (7.1.2)

When the available data is not sufficient to construct a PDF, upper and lower bounds might instead be provided from experts' opinions. For such imprecise bound information (epistemic uncertainty) of an uncertain parameter, the Bayesian method can be used in probability theory under the assumption that the imprecise information is given for events which are mutually exclusive and exhaustive [64]; that is, in case the imprecise information is given to any subset of S, the probability information for each elementary event should be reproduced by using some assumption for the probability mass distribution within the subset. On the other hand, in possibility theory, for the given bound information, a membership function is defined to represent the degree of belonging or not belonging to the leveled interval (membership) by taking the uncertain variable as a fuzzy variable. With different levels of the degree of membership (α cuts), fuzzy subsets of the fuzzy variable are obtained. Since the fuzzy set

denoted by . In evidence theory. Pl and probability) eventually will converge to a single value when the information is increased sufficiently. For multiple uncertain parameters. However. 142 . The measurements (Bel. unlike a PDF of probability theory and a membership function of possibility theory. and the bounded result includes the probability result. is defined for UQ analysis of a structural system. as shown in Eqs.is originally developed with the contention that meaning in natural language is a matter of degree [65]. is constructed by using the Cartesian product of the propositions of each uncertain parameter. which can be obtained by assuming any distribution for the given interval information. When the imprecise information is given by multiple non-consonant intervals with corresponding degrees of belief. The subsets (intervals of an uncertain variable) to which the bodies of information (BBAs) are assigned can be consonant or non-consonant and continuous or discrete. which is similar to the joint probability density function in probability theory. The possible joint set. As mentioned previously. The interval can be the interval of physical value or the interval of imprecise statistics. the joint BBA structure. imprecise information expressed by any subset of FD is assigned to a BBA structure without any additional assumption. the fuzzy subsets are consonant sets with corresponding α cuts. the BBA structure in evidence theory cannot be expressed with an explicit function. Pl]) due to lack of information. the fuzzy membership function should be approximated to solve with possibility theory [66]. evidence theory gives a bounded result ([Bel.

1 Three Bar Truss The displacement of node 4 is considered as a limit-state response function.06 psi P (40000lb.3) and (4.3. by finding the maximum and minimum responses using the proposed cost-efficient algorithm in the previous chapter. The joint BBA structure must follow the three axioms of BBA structure. The nominal values for the uncertain parameters are fixed and the 143 . It is assumed that uncertainties exist in the independent parameters of elastic modulus (E) and applied force (P).1. (4. There are three truss elements and a static load is applied at node 4. [Eqs. 7.4)]. Every possible event is required to be checked in the evaluation of the Belief and Plausibility functions. -40000lb) Figure 7. 7.1) and (4.2).2.2.0 [67].(4.3. The finite element analysis (FEA) of this structure was performed using GENESIS 6. 10″ 10″ 10″ 1 2 A1 A2 3 A1 4 Material:E=1.2 Case Study I: Three Bar Truss The structural model of a three bar truss is shown in Fig.

as shown in Fig.02 Figure 7. we consider the situation in which an expert gives multiple interval information for the two uncertain parameters.actual values are obtained by multiplying the nominal values by uncertain factors. 144 . that is.07 1.0 1.1 1. possibility theory.10 Force factor 0.7 0. 7.2.5 p5 p6 0.3 1.0″).2.10 0.05 0.1).6 0.5 0.9 p1 p2 0.60 0. δ fail = {δ Node _ 4 : δ Node _ 4 ≥ δ lim it } (7.70 1.2 1.1) Elastic modulus factor 0. The goal of this problem is to obtain an assessment of the likelihood that the displacement of node 4 is larger than the limit-state value (δlimit = 3.2 Imprecise Information for the Scale Factors of Uncertain Parameters (E and P) In this example.3 1. (7.03 0.10 p4 0.9 1.2. and probability theory) are investigated and discussed in the following subsections.15 0. the likelihood that the displacement is in the set given by Eq.1 1.4 1.03 0. Different solution approaches (evidence theory.2 p3 0.5 0.6 0.0 1.8 e1 e2 0.5 e3 0.8 0.4 e4 e5 e6 0.7 0.05 1.

the consonant interval information can be reproduced by performing inclusion techniques. and a fuzzy set A over the referential X is defined by means of membership function: µ F from X to [0. When the given interval sets are not consonant. the possibility theory approach cannot be applied directly. The inclusion procedure proposed by Tonon et al [66] is applied to the current problem. are considered in this example.Possibility theory approach Since only parametric uncertainties. 1]. The BBAs of the obtained consonant intervals are corrected by introducing a correction mass β. indicating a continuous increase from non-membership to full membership. As a special case of BBA structure. 145 . The intervals are ordered based on the effect on the reliability index and extended to include other intervals. since the given intervals shown in Fig. The α-level cut of A is the subset defined by {x.0. For any x in X. 7. which are characteristically aleatory uncertainties. A degree of membership is associated to every element x. In this example.0 to 1. In the inclusion procedure. µ F (x) is the membership degree of x in A. it is possible to calculate bounds on the probability of system failure with a frequentistic view of fuzzy sets of possibility theory.2 are not consonant. the BBA structure can be defined as fuzzy sets when the intervals are consonant [69]. A fuzzy set is characterized by a fuzzy membership grade (also called a possibility) that ranges from 0. The referential X could be viewed as the frame of discernment in evidence theory and also as the sample space in probability theory. µ F ( x) ≥ α } . consonant intervals are constructed to give a conservative result by decreasing the loss of information. The reader is referred to reference [66] for the details of the inclusion procedure.

1497 m(e4′)= 0.1001 β m(e2′)= 0.4 e4 e5 e6 0.03+5β =0.6 0.10 Updated masses 0.03 + β + +β β + + + + β β + β β + + β β β + β + + + + β m(e6′)= 0.07-5β 0.5999 β m(e3′)= 0. the corresponding fuzzy responses must be computed via Zadeh’s extension principle [27].60 0.0305 Plausibility Plausibility of the singletons (membership function of E) E Figure 7.15-3β =0. 7.05+3β =0. When multiple fuzzy variables are considered in a functional relationship.15 0.05-β 0.05 0.60-3β 0.10-2β 0.07 1.0695 m(e5′)= 0.8 e1 e2 0.60-1β =0.0001 e3 0.1 1.15-4β 0.5 *β = 0.4.5 0.03 0.Elastic modulus factor 0.7 0. 146 .9 1.3 and 7.2 1.3 Consonant Intervals and an Approximate Membership Function for the Scale of Uncertain Parameter (E) Using the Inclusion Technique The reproduced consonant intervals and the plausibility function of the singletons are shown in Figs.3 1.07-5β =0.0 1.0503 β m(e1′)= 0. The plausibility function for focal sets is accepted as the approximate membership function of the fuzzy set in this procedure.10+β =0.

8 0.4 Consonant Intervals and an Approximate Membership Function for the Scale of Uncertain Parameter (P) Using the Inclusion Technique Based on Zadeh’s extension principle.10+5β =0.05-3β + β + + + *β = 0.1005 Plausibility Plausibility of the singletons (membership function of P) P Figure 7.10 0.05 1.03+3β =0.0499 0. Dong and Wong [34] proposed the Level Interval Algorithm (LIA).02 Updated masses 0. also called the Fuzzy Weighted Average algorithm and the vertex method.05-1β =0.03 0.9 p1 p2 0.0201 0.6997 m(p3′)= 0.70 1.10-5β β β β β β + + + + + m(p1′)= 0.3 1.2 p3 0.6 0.70-4β β β β β + 0.10 m(p5′)= 0.70-3β =0.02-2β + + β β + + m(p4′)= 0.10 0. LIA.03-β β + 0.0001 p4 β β m(p2′)= 0.02+β =0. which is basically the vertex method.7 0.1 1.0303 m(p6′)= 0.0 1.4 1.5 0. Several variation methods were developed to improve the 147 .5 p5 p6 0.Force factor 0.0995 0.10-5β =0. is reliable only for a monotonic system response.

Guh et al [71]. 148 . The reader is referred to reference [72] for the details of the LIA procedure.   Possibility           0. LIA is applied due to its simplicity of implementation.5 System Response (Displacement) Membership Function for the Three Bar Truss In this example. From the response membership function. the possibility of failure can be obtained as 0. With the approximate membership functions of uncertain variables (E and P) from the inclusion technique. Further discussions comparing the results of evidence theory and probability theory are presented later. LIA simplifies the process to obtain the fuzzy output by discretizing the membership functions of the input fuzzy variables into prescribed α-cuts.5 by LIA.1).computational performance in the fuzzy sets context by Liou and Wang [70]. the fuzzy response (displacement) is obtained. as shown in Fig.1308 for the defined failure set given in Eq. 7. and so on. (7.2.1308               δ Figure 7.

the given imprecise information shown in Fig. 7. This principle can be interpreted to mean Probability that all simple events for which a PDF is unknown have equal probabilities. when a PDF for an uncertain variable is not available. the uniform distribution function is often used.7 PDF of p (Scale of Force) Using Uniform Distribution Assumption 149 .6 PDF of e (Scale of Elastic Modulus) Using Uniform Distribution Assumption p Figure 7. e Probability Figure 7.Probability theory approach Since in the probabilistic framework probability should be assigned to only elementary events. In probability theory. which is justified by Laplace’s Principle of Insufficient Reason [73].2 is not appropriate for the probabilistic analysis.

The resulting failure probability is obtained as 0. which is similar to the joint probability density function in probability theory. we have the bound probability [0.6 and 7. but only the probability masses (BBAs) are assigned by available evidence (expert’s opinion or experimental data).0039 0.0039. there is no need to make any assumption or approximation for the given imprecise information because the BBA structure can consist of any combination of the possible subset of FD (see the three axioms of Basic Belief Assignment). The popular sampling technique. From this result.000 samples is performed for the obtained PDFs of uncertain variables (e and p). is defined by using the Cartesian product in the JFD.0345] for the system failure based on the 150 .7. The given imprecise interval information is adopted as a BBA structure itself. as shown in Figs. 7.0345]) is obtained with the cost effective algorithm. Evidence theory approach In evidence theory. The approximate PDFs of uncertain variables are obtained. there is no further information to select or approximate a PDF for the given intervals. As a result. a joint BBA structure. the Belief and Plausibility functions are evaluated and the bounded result ([0. 0. The discussions of the result are presented later.0058 for the current example.In this example. unlike in possibility theory and probability theory. by the assumption that probability mass in each interval is distributed uniformly. Monte Carlo Simulation (MCS) with 100. For multiple independent uncertain parameters in a structural system.

7. because the given information is not precise.2. possibility theory and evidence theory give a bounded result. From Fig. Figure 7.2. so it gives a single-valued result.. x 2 .2. the difference between plausibility and belief in evidence 151 .8.2. Probability theory does not allow any impreciseness on the given information. x = {x1 . However. (7. +∞].1 shows the results from each approach and corresponding computational cost.. CCFs are defined for the set δfail..3) CCFs can be interpreted in the same way as the cumulative distribution function in probability theory.2) δ fail = {δ Tip : δ Tip ≥ δ lim it . Comparison and discussions of different approaches Table 7.8 shows the Complementary Cumulative Functions (CCFs) for each measurement.given limit-state function. δ lim it ∈ δ } (7.. useful insights into the confidence of the result from UQ analysis with imprecise information can be obtained. where δ and δfail are defined as in Eqs. such as the probability.3) δ = {δ Tip : δ Tip = f ( x). The necessity in possibility theory is zero because the interval for determining the measurements is set to [δlimit.2) and (7. x n ) ∈ X } (7. It is intuitive and reasonable to obtain the bound result instead of a single value. Possibility theory and evidence theory give bounded results and probability theory gives a single-valued result. with a varying value of δlimit∈δ. Specially.

0.0039.8 Complementary Cumulative Measurements of Possibility Theory. Table 7.0058 MCS/100000 Evidence theory [0. This Ignorance reflects the lack of confidence in an UQ analysis result.1308] Solution techniques / Number of simulations LIA / 48 Probability theory 0. Probability Theory. By increasing the available data and knowledge.1 Comparison of Results and Costs for Three Bar Truss Example UQ Approaches UQ Results Possibility theory [0. and Evidence Theory for Three Bar Truss Example 152 δ .0000 0.0345] Proposed Algorithm / 17   Possibility   Plausibility Probability   Belief                               Figure 7. the difference (Ignorance) decreases to zero and the confidence on the resulting measurement increases to one.theory can be defined as another Ignorance ( = Pl − Bel ).

the degree of membership of the system response corresponds to the degree of membership of the overall most preferred set of fuzzy variables. The detailed discussions are given for the result of each approach as follows: 1) The result from possibility theory gives the most conservative value essentially because of Zadeh’s extension principle. there are no unique consonant intervals. (7. so the degree of Uncertainty is zero. as in Eq. however. in the inclusion procedure to reproduce consonant intervals.If Pl and Bel are the same for a certain limit-state value.4) x: y = f ( x ) where x can be viewed as a vector of fuzzy variables for a multiple dimension problem. Hence. For the computational cost.1 that the cost effective algorithm is useful in decreasing the computational cost. the constructed consonant intervals are dependent on the given limit-state functions.2. then it can be interpreted that there is no doubt about the resulting degree of belief of system failure. The computational performances of possibility theory and probability theory can be enhanced by using advanced techniques. In that principle. the cost-efficient algorithm has the most efficiency and generality.2. the algorithm can be incorporated to reduce the computational cost. Even in possibilistic and probabilistic approaches. and the extension of intervals in the inclusion technique is not limited to only one side.4). that is. However. it is shown in Table 7. µ F ( y ) = sup [ µ F ( x)] (7. the location where the reliability is maximized in the referential X should be correctly identified to avoid the extreme conservative result. For 153 .

the inclusion technique can give an extreme result (0 or 1) for the possibility and necessity measurements. the original intervals are extended in both directions (right and left) to be the new inclusion intervals. even though it is not clearly stated in the reference [66]. in a convex limit-state function. When the memebership function is modeled by BBA structure.5 should be smoothed by introducing other assumptions. 1. in a concave limit-state function which gives two boundary points in the referential X as maximizing reliability locations. Moreover.5]) to include the other consonant intervals with the given correction mass β. based on Zadeh’s basic idea of fuzzy sets. the membership function can be modeled as a consonant BBA structure to analyze within the evidence theory framework. Conversely. 7. the information given to an interval could lose its physical meaning.example. which is the same as the referential X ([0. Thus. there 154 . which can be viewed as a probability mass of the interval. By expanding the intervals to include other intervals in the inclusion technique. unless other assumptions or criteria are introduced for the inclusion technique. when the maximizing reliability location is at the middle of X. the BBA of e1 interval in Fig. the sharp boundaries in the approximate membership function shown in Fig. the inclusion technique can be applied only for a system for which limit-state functions are monotonic. 7.5. However. For example. is assigned to the new interval.3. In this example. the transition between membership and nonmembership of a location in the set is gradual [13]. non-consonant multiple intervals are reproduced as a fuzzy membership function to apply the possibilistic approach.

Hence. and so on). Plausibility]) that always includes the probabilistic result.is no need for additional techniques or assumptions once the α cut is accepted as a level of basic belief. 2) Contrary to possibility theory. The two main reasons that structural analysts were not familiar with evidence theory are the high computational cost and the misunderstanding of the capability of incorporating the pre-existing probabilistic information. a BBA structure in evidence theory can be used to model both fuzzy sets and probability distribution functions due to its flexibility. 155 . As discussed throughout this paper. the resulting probability is changed significantly. 3) Evidence theory gives a bounded result ([Belief. additional techniques might be required to obtain supplementary measurements (expectation. Moreover. and probability). probability theory gives the smallest prediction of system failure among the upper limits (possibility. that is. probability theory can seriously underestimate a possible event unless the additional assumption (uniform distribution) is not justified properly. which can be used in a decision making situation. variation. The consonant BBA structure can be constructed with discretized α cuts. plausibility. since it just gives a single value result. the resulting probability would be merely the reflection of the assumption on a target system with imprecise information. once an assumption is introduced. confidence bound. the lower and upper bounds of probability based on the available information. In other words. Based on different assumptions other than uniform distribution function.

discretized exclusive probability sets might be obtained. 156 . the computational cost of evidence theory can be significantly reduced by using the cost effective algorithm. Once the surrogate model is constructed. when there exist two exact normal PDFs for the scale factors. For imprecision. As shown in Table 7. It shows that even though there is no closed-form function for the given imprecise information. For example. can be viewed as the best estimate of system uncertainty. The obtained bounded result of evidence theory. and less marginal than the result of probability theory. even in possibilistic and probabilistic approaches. as shown in Fig. 7. the Belief and Plausibility evaluations can be performed efficiently by the proposed algorithm. which tends to be less conservative than that of possibility theory. the algorithm can be employed to reduce the computational cost. because the given imprecise information is propagated through the given limit-state function without any unnecessary assumptions in evidence theory. there is no additional cost for updating the result with increased information. e and p (means of one and standard deviations of 0. As mentioned previously.2). with different levels of discretization.9.1. different types of information (fuzzy membership function and PDF) can be incorporated in one framework to quantify uncertainty in a system.That is. an imprecise information situation can be assumed due to lack of information or data in the current three bar example.

10 are calculated without additional simulations due to the construction of the surrogate for the limit-state function. 157 . 7. probability.10 shows that the bound of evidence theory decreases.9 Discretized Normal PDF (N: the number of discretization) As the number of discretization levels increases.N=5 N=30 Figure 7. The updated bounds in Fig. 7. Fig. and plausibility) eventually converge to a single value by increasing the data sufficiently. This result shows that the three measurements (belief.

Probability .

 .

 .

 Plausibility .

 .

0 (7.10 The Convergence of Bel.4.3 Case Study II: Intermediate Complexity Wing (ICW) For the second numerical example.3.4. 1. are considered as multiple limit-state functions. 4. and Probability Regarding the Number of Discretization 7.  Belief True Probability     Number of discretization levels (N) Figure 7. Pl.0(in) ≤ 1 . This is a representative wing-box structure for a fighter aircraft. the structural model of an intermediate complexity wing is shown in Fig. as shown in Fig. The dominant frequency and tip displacement at the marked point. Displacement : Disptip 2. 4.1) 158 .

Combination : Disptip 0.45(in) (7.1 Figure 7.12.3.9 1.7 0. 7.2 0.8 0. Frequency : Freq ≤ 1 . 159 .5 0.0 PID: P11 1.5 Source 2 PID: P21 BBA: 0. as shown in Figs. as shown in Fig.1 1.2 1.9 1.04 P23 P24 P25 0. and the averaging discretization method [74] has been used to obtain the BBA structure with the interval mean value of the normal distribution of elastic modulus factor.0 (7.1 P13 P12 BBA: 0. 7.14 0.5( Hz ) ≤ 1 .11 and 7.8 1.2.2 0.25 0.2 1. Source 1 0.2) Freq ≤ 1 .7 0. the uncertainties are expressed by intervals of scale factor for the static loads and by an interval of statistical mean value of the elastic modulus of the skin elements from two information sources.02 P22 0.025 0.3) In this example.0 6.2 0.3.0 20.11 Scale Factor Information for Static Force from Different Sources The information of force factor from two different sources is aggregated by Dempster’s rule of combining.025 0.0( Hz ) 3.0 1.7 0. The surrogates are constructed for each limit-state function.12.5 P15 P14 0.

The number of computations also decreases by approximately 85% by using the proposed method instead of the simple vertex method.1 0 0. The benefit of the proposed method is expected to increase as the scale of the problem increases. When the limit-state function is not monotonic.12 Discretized Intervals for Elastic Modulus with Given Interval Statistics As a result.4 1.8 0.1 0.7 0. 160 .2. However. are given.2 1 .0526 plausibility. unless other considerations. the failure event can be missed and plausibility can be underestimated by using the vertex method.9 0.0] σ=0.8 1 Elastic Modulus Factor 1 .6 0.8 1. which is determined by the third limit-state function.6 Cumulative Normal distributions µ = [0.2 was obtained with multiple limit-state functions.4 0.5 0. by using the proposed algorithm. as shown in Table 7. The result of the proposed method shows us that we have as much as 0.12 0.4 0. the nonlinearity and non-monotonicity can be reflected to assess more accurate Bel and Pl measures.2 0. Table 7.2 0. such as linear variations of responses.6 Figure 7. for the failure of the wing structure.3 Number of discretized intervals: 32 0.

0526 79 161 .000 0.2 ICW Results Using the Vertex and Proposed Methods Bel Pl Number of function evaluations Vertex Method 0.000 0.Table 7.0101 512 Proposed Method 0.

162 . However. Reliability Assessment Using Evidence Theory and Design Optimization Due to the inevitable natural variability and uncertainties of design parameters in engineering structural systems. Sensitivity analyses of evidence theory are developed for effective design modification and data acquisition. the probabilistic approach might not be appropriate for RBDO unless strong assumption is accepted for the uncertainty of interest. is introduced first. probability theory has been employed in a multidisciplinary design optimization procedure to address uncertainty in a structural system. a supplementary measurement.8. plausibility decision. To address the discontinuity of the measurements (Bel and Pl). In such cases. In many engineering applications. Reliability Based Design Optimization (RBDO) techniques are developed to address the analytical certification of the performance of a structural system. the Uncertainty Quantification (UQ) using evidence theory proposed in the previous chapters is employed for the reliability assessment in a design optimization procedure with multiple types of uncertainty. Therefore in this section. it is not always possible to obtain the precise and complete information for the probabilistic uncertainty description in practice. design optimization without any consideration of a reliability or safety index might be unreliable and vulnerable to a system failure in service.

However. ck ∈C m (c k ) f −1 (U f ) ∩ c k ck (8. one needs a continuous measurement that can be used to make a decision in the sequential iterative procedure. In this principle it is assumed that the BBA for a set A. So as a supplementary continuous measurement.1 for a onedimensional example. as can be seen from Fig.8.b).1. However.1 Plausibility Decision Function The plausibility function is a discontinuous step function.9. if white probability is defined after applying Dempster’s rule of combining. such as in a procedure of design optimization in which an initial design is to be improved by considering uncertainty with evidence theory.4) as follows: Pl _ dec(U f ) = ck :ck ∩ f −1 (U f ) ≠ ∅ .1) where | | indicates the total magnitude of a proposition. a plausibility decision (Pl_dec) can be introduced by employing the generalized insufficient reason principle [73] to obtain a continuous function. The Pl_dec function is obtained for the degree of plausibility after the BBA structures are combined by Eq. 163 . can be equally distributed to the focal subsets of A. Pl_dec is obtained by calculating the ratio of the failure region to the entire region. introduced by Elishakoff (1999). 4. with the limit-state function f(x. (4.3. which is expressed by the proposition shown in Fig. in a decision-making situation. Basically. 8. m(A). when the given information is very poor. Pl_dec can also be viewed as white probability.

b) f is fail f is safe Limit x1 f -1(Uf)∩ck x2 ck=[x1 . the failure region can be obtained by integrating the H function as follows: f −1 (U f ) ∩ c = = xi .1 x2 .Limit > 0 1 otherwise 0 (8. in a Joint Proposition ck The failure region. xi −1 })dx1dx2 . b) − Limit ) (8..1. can be obtained numerically by defining the H function as follows: H (b.. f -1(Uy)∩ck.f(x. 1 x1.1.3) and where b is the vector of system deterministic parameters and x is the vector of uncertain parameters.. x ) = I ( f ( x..4) . 2 x2 . x2] Figure 8. H (b.. 2 x1.dxi H (b.1. x )dΩ 164 (8.. . 2 xi . f -1(Uf)∩ck. Then. x2 .2) where I = f . {x1 .1 The Failure Region.. Limit .1 Ω . Limit .

Plausibility and Belief. 8. Since the degree of plausibility as an upper bound is more interesting than the degree of belief. are derived. A similar procedure could be applicable for the sensitivity of belief. and the sensitivity analysis with respect to ∂m( Aij ) deterministic parameter b. As a continuous.2 Sensitivity Analysis Using Evidence Theory Sensitivity information for the quantified uncertainties that are expressed with degrees of plausibility and belief can be useful in a structural system design procedure. ∂b 165 . Pl_dec makes it possible to compute the sensitivities of plausibility with respect to other model parameters.where Ω indicates the multidimentional uncertain space. we can determine the primary contributor to the measurements. the sensitivity is derived for the degree of plausibility of an engineering structure problem that has epistemic uncertain parameters. Two sensitivities. ∂Pl . which are obtained by the limit-state function of a structural system. ∂Pl (C ) . the sensitivity analysis of plausibility with respect to a BBA of proposition. Sensitivity analysis also makes it possible to improve the current design by efficiently decreasing the quantified failure likelihood in the structural system. With the sensitivity analysis. single-valued function between Pl and Bel.

m( Ae1n ) .8.1 Sensitivity Analysis of Plausibility for BBAs of Propositions In sensitivity analysis. and Aemn indicates the nth proposition of the mth expert (e). it is our goal to find the primary contributing expert opinion for the degree of plausibility.1) can be expanded for the Dempster’s rule of combining: 166 .2. Assume that there are two experts. Additionally.2. The result from sensitivity analysis indicates to which proposition the computational effort and future collection of information should be focused. 2). which is defined by the discrepancy of belief from plausibility. as shown in Eq. (m=1. who give us their opinion for the following derivation. then Eq. is obtained by analytically differentiating the degree of plausibility by the proposition’s BBA. The sensitivities of plausibility for the BBA of a proposition.1) where Ai and Bj are combined propositions for parameters A and B. In this work. If we want to derive the sensitivity for plausibility with respect to the BBA of nth proposition of Expert 1. it is assumed that the number of experts and their intervals are given and fixed.2. m(Aemn). we can be more confident in the reliability analysis result. By decreasing the degree of ignorance. (8.2. this sensitivity analysis can be easily shifted from the sensitivity for plausibility to the sensitivity for the degree of ignorance.1): ∂ m(ck ) ∂ m( Ai )m( B j ) ∂m ( Ai ) ∂Pl (U ) c ∩U ≠0 c ∩U ≠ 0 m( B j ) = = = ∂m ( Aemn ) ∂m ( Aemn ) ∂m( Aemn ) ∂m( Aemn ) c k ∩U ≠ 0 k k (8. (8.

Ae 2 q ] and it is expanded as follows: comb[ Ae1 p .∂m( Ai ) ∂ A = ∂m( Ae1n ) ∂m( Ae1n ) 1 − m( Ae1 p )m( Ae 2 q ) (8.3) where comb[ Ae1 p . Ae 2 q ]'= ∂ ∂m( Ae1n ) ∂m( Ae1 p ) = Ae1 p ∩ Ae 2 q = Ai and the terms. Eq. Ae 2 q ]' comb[ Ae1 p .2) becomes Ae1 p ∩ Ae 2 q =∅ comb[ Ae1 p .4).5) =0 167 .4) and ∂m( Ae1 p ) on the right side of Eq. Ae 2 q ] ∂m( Ae1n ) 1 − constr[ Ae1 p .2. Ae1 p ∩Ae 2 q = Ai and constr[ Ae1 p . (8. Ae 2 q ] ∂ = 1 − constr [ Ae1 p . Ae 2 q ] = m( Ae1 p )m( Ae 2q ) .2. m( Ae1 p ) m( Ae 2 q ) Ae1 p ∩ Ae 2 q = Ai ∂m( Ae 2 q ) ∂m( Ae1n ) ∂m( Ae1n ) m( Ae 2 q ) + m( Ae1 p ) ∂m( Ae 2 q ) ∂m( Ae1n ) (8. Ae 2 q ]' (1 − constr [ Ae1 p .2. comb [ Ae1 p .2. Ae 2 q ] + comb[ Ae1 p .2. Ae 2 q ] = m( Ae1p )m( Ae2q ) . (8. are defined through ∂m( Ae1n ) the basic axioms for BBAs as follows: ∂m( Ae 2 q ) ∂m( Ae1n ) (8.2) e 1 p ∩ Ae 2 q = Ai m( Ae1 p )m( Ae 2 q ) Ae1 p ∩ Ae 2 q =∅ Using the notation.2. Ae 2 q ] × constr [ Ae1 p . Ae 2 q ]' is the derivative of comb [ Ae1 p . Ae 2 q ]) 2 (8.

we can improve a current design efficiently by changing the current deterministic (controllable) design parameters 168 . Ae 2 q ]'in Eq.9).9) Akp ∩ Ae ( k +1) q =∅ Therefore.2 Sensitivity Analysis of Plausibility for Structural Parameters It is useful to obtain the sensitivities of deterministic parameters in an engineering structural system. (8. 3.2.8) and (8.3).6) N n =1 (8. (8. 8.2.8) k=2.2.2. the combined proposition for a parameter is obtained by applying Dempster’s rule of combining sequentially. In the case that there are N experts who are giving their opinion. m c ( Akp ) = m( Aekp ) m ( A ) m( Ae ( k +1) q ) mc ( A( k +1) p ) = c kp Akp ∩ Ae ( k +1) q = A( k +1) 1− m( Akp ) m( Ae ( k +1) q ) k=1 (8.2.2.2. Ae 2 q ]'is applied.2.7) m( Ae1n ) = 1 For constr[ Ae1 p . …. due to its algebraic commutative and associative properties [36]. With the results of sensitivity analysis. when p≠n (8. when p=n =− 1 Ae1n − 1 .∂m( Ae1 p ) ∂m( Ae1n ) ∂m( Ae1 p ) ∂m( Ae1n ) = 1 . the whole procedure for sensitivity analysis with the differential of Dempster’s rule of combining is repeated for N experts. as given in Eqs. the same procedure as comb [ Ae1 p . N-1 (8.

Pl] is too large. (8. shown in Eq.10) might be quite complex or even impossible in many engineering applications. (4. because of the discontinuity of a BBA structure of an uncertain parameter.2.1). the proposed cost-efficient algorithm can be employed to construct a surrogate model of a limit-state function for each joint interval proposition. x) dΩ Ω ∂bi (8. The gradient of plausibility is approximated using the degree of plausibility decision: Pl_dec. the multi-dimensional integral of the limit-state function in Eq. The local approximations are 169 . ck ∈C = ck :ck ∩ f −1 (U f ) ≠ ∅. However.2. because the Pl_dec function is a continuous function whose value lies between Pl and Bel. Pl_dec can be used as a supplemental measurement to make a decision whether a system can be accepted or not when the resulting bound [Bel.2. The sensitivity with respect to a system deterministic parameter is derived with Pl_dec as follows: ∂Pl _ dec ∂ = ∂bi ∂bi ck :ck ∩ f −1 (U f ) ≠ ∅. ck.to decrease the expected failure likelihood in the structural system. Pl_dec makes it possible to compute the sensitivities of plausibility of the system deterministic parameters. To alleviate the numerical and computational difficulties. the plausibility function in evidence theory is a discontinuous function for varying values of a deterministic parameter. bi. Also. ck ∈C where ∂( f −1 (U f ) ∩ c k ) ∂b i m(ck ) m(ck ) f −1 (U f ) ∩ ck ck ∂H (b. However.10) ck is the gradient of the failure region with respect to a deterministic parameter.

The subspaces are determined by the given interval information for each uncertain parameter.2a.2 The Network of Local Approximations 170 . as shown in Fig.constructed in the subspaces of the total function evaluation space. a) The subspaces for local approximations defined by disjointed intervals of each uncertain parameter b) Constructing a network of local approximations Figure 8. the subspaces for local approximations are defined by the Cartesian products of the disjointed intervals of each uncertain parameter. For example. 8.

The joint interval proposition. the subspace can be further divided into more than two subspaces for better accuracy of the surrogate model. the local RSM is constructed by obtaining sampling points. b ) b) Second level subdivisions of LRSMs for the integration of failure regions Figure 8.The surrogate model of the limit-state function is expressed by the network of local approximations of each subspace. ck. a) Identifying and projecting a failure limit-state surface on the function evaluation space x 2 = LRSM j ( x1 . 8. The degree of fitness of the constructed RSM in a subspace can be checked by performing a residual analysis. is evaluated by using the surrogate model instead of the actual limit-state function. a quadratic response surface model (RSM) is selected due to its simplicity of implementation.2b. As a local approximation.3 Linear Response Surface Models (LRSMs) for Sensitivity Analysis 171 . When the fitness is not satisfactory. As shown in Fig.

For the sensitivity analysis. This numerical procedure is performed with the obtained closed-form surrogate model without a high computational cost. Since the integration of the H function in Eq. as shown in Fig. x) dΩ = Ω ∂b ∂LRSM j m j =1 Ω n −1 ∂b dΩ n−1 172 (8.2. LRSMs are constructed by selecting one of the uncertain parameters as a dependent variable with a given limit-state value.11) where m is the number of subdivisions for LRSM.2.10) is not the integration for the limit-state function value. such as a random search or an optimization technique. the failure region of the limit-state function is identified by any available technique. linear RSM (LRSM) functions are reconstructed over the network of the original RSM models by making finer subdivisions. 8. as shown in Fig.10) is also obtained as follows: ∂H (b. (8. The dimensions of the LRSM are decreased by one. Over the identified failure region. the integration term in Eq.3b. 8. After obtaining the LRSM functions. (8.2.10) efficiently.2.3. but for the failure region of the function evaluation space.12) .To perform the multidimensional integration in Eq. x)dΩ = m Ω n −1 j =1 LRSM j dΩ n −1 (8.2. the multidimensional integration of the H function is obtained by the summation of the integrations of LRSMs as follows: Ω H (b. (8.

3) where f and Gj are the objective and constraint functions.Hence. i = 1. the uncertainty constraints [Eq.2. There are many studies of RBDO with probabilistic uncertainties. uncertain design variables. and so on. for a failure event G j (d . Simpson’s rule. the multidimensional integral in Eq.3. and it is required that the value of the uncertainty measure be greater than the reliability. j = 1.3. Nr. Nr (8. X ) ≥ 0 . and a required reliability index. (8.1) Subject to U (G j (d . In the probabilistic framework. . and Nd. (8. (8. Rj. X ) ≤ 0) ≥ R j . k = 1. Nd . β t . 8. and nondeterministic uncertainty-based constraints. X is the uncertain design vector. and Ng are the number of deterministic design variables. respectively. d il ≤ d i ≤ d iu . Ng X k .3 Reliability-Based Design Optimization Using Evidence Theory The Reliability-Based Design Optimization (RBDO) can generally be formulated as: Minimize f (d ) (8. . The non-deterministic. d is the controllable deterministic design vector. respectively. as follows: 173 .10) is performed by using a conventional numerical integral scheme. after linearizing the obtained nonlinear surrogate model sequentially.2) .2)] can be characterized by failure probability. P (•) .3. uncertainty-based constraints are described by an uncertainty measure U (•) . such as trapezoial rule.3.

The traditional RBDO with Eqs. response surface models for constraint functions or reliability indices can be constructed for fast 174 . Usually.P (G j (d . or the second-order reliability method (SORM) [9-11]. Ng (8. (8. To reduce the computational cost in the inner loop. X ) ≥ 0) = Gi ( d .4) The failure probability for a jth constraint can be expressed by multiple dimensional integrations: P (G j (d . The inner loop is to find the uncertainty measurements with many repetitive simulations and the outer is the regular design optimization loop to find the optimum design that satisfies the given constraints.3. Several popular approaches are proposed for design optimization under probabilistic uncertainties.3. X ) ≥ 0 f X ( X ) dX 1 dX Nr ≤ Φ (− β t ) (8.5) where f X ( X ) is the joint probabilistic density function of all probabilistic variables. such as the first-order reliability method (FORM) [8]. X ) ≥ 0) − Φ (− β t ) ≤ 0 .3) requires a double loop iteration process with reliability objective and constraints. To address the computational difficulty. a reliability requirement is imposed on a constraint and it is well known that the multiple integration for a reliability constraint function in a practical engineering application is computationally prohibitive.3. some approximate probability integration methods have been developed.1) ~ (8. j = 1.3. .

. . the probabilistic constraint is replaced with the performance measure and the RBDO model using PMA can be redefined as Minimize f (d ) Subject to G pj (d .3. Sequential Optimization and Reliability Assessment (SORA) is presented by Du and Chen [76] using a shifting design vector to move back the boundaries of violated constraints to a feasible region based on the reliability information from the previous cycle.10) 175 . (8. (8.” The basic idea is to replace random variables with the safetyfactor based values. These methods are useful only for the functions that can be well approximated by the pre-fixed.3.8) where G pj is the jth performance measure and is obtained from a nonlinear optimization problem in U-space.probability calculation. X ) ≤ 0. Nd . [77]. defined as Minimize G pj (d .3. Ng X k .9) Subject to U = β t (8. d il ≤ d i ≤ d iu .3. Nr (8.6) j = 1. X ) (8. In the PMA+. k = 1. the original probabilistic constraints can be adjusted to reach a specified reliability target. The safety-factor based approach proposed by Wu et al. non-linearity regression-based method. [75] uses an “approximately equivalent deterministic constraint.7) . By varying the design parameters.3. i = 1. An enriched Performance Measure Approach (PMA+) is proposed by Youn et al.

probabilistic data.g. and as a fast reliability analysis under the condition of design closeness. it is found that evidence theory can provide a unique generality in the incorporation of various types of uncertainties (e. and so forth. There are several alternative frameworks to handle non-probabilistic uncertainties. there might be enough probabilistic information for some uncertain design variables and they can be expressed by well-known Probability Density Functions (PDFs). and interval information). In a practical large-scale and complex structural system. PMA+ for RBDO has three key ideas: as a way to launch RBDO at a deterministic optimum design. Therefore. as a probabilistic feasibility check. In this chapter.where β t is a prescribed target reliability. the other uncertain variables might not be described by probabilistic functions due to insufficient and incomplete data. interval mathematics [33]. such as possibility theory [13]. evidence theory [16]. 176 . Among the non-probabilistic theories. as shown in the previous chapters. reliability index or performance measure) to avoid the actual multidimensional integration of reliability of failure.. However. RBDO is tackled with multiple types of uncertain parameters in an engineering structural problem with the proposed costefficient algorithm.e. Furthermore. Most of the efficient RBDO approaches in a probabilistic framework employ representative indicators (i. the RBDO methods based on the probabilistic reliability index or performance measure are not valid for multi-type uncertain variables. fuzzy membership.

For instance.85×107 (psi).8. TH1 Spars and Ribs Lower wing skin Section for factor. and the tip displacement at the marked point in Fig.4 ICW for RBDO In this system. which represent aerodynamic lifting forces. it is assumed that there are two uncertain factors describing parameters for elastic modulus (E) and load (P).4 is considered as a limit-state function.4 Numerical Example Figure 8. Tip displacement Upper wing skin Section for factor. TH 2 Section for factor.4 shows the structural model of an Intermediate Complexity Wing (ICW) for RBDO. 8. TH 3 Figure 8. There are three deterministic factors 177 . are applied along the surface nodes. The nominal value for each parameter is fixed and the actual values are obtained by multiplying the uncertain factors. the basic value of elastic modulus is 1. Static loads.

5 2. as shown in Fig.6.875 1.15 0.75 PID: E11 BBA: 0.5 Elastic Modulus Factor Information 178 .18 E13 0.25 1.4 1.0 E12 E14 0. 8. Expert2) give their uncertain information for the two uncertain parameters with discontinuous intervals.002 0. TH2. and TH3 ).0 1.5 and 8.5 0. Because the available data for the parameters is not enough to predict precise variability.25 0.7 0.75 E23 E24 E25 0.005 Expert2 0.022 0.5 2.875 1. Expert1 0.75 E15 E16 0. It is assumed that two equally credible experts are giving their opinions with multiple intervals for each uncertain parameter with respective BBAs.0 1. Because of the lack of information.for the thicknesses of three wing sections (TH1. Let us consider the situation in which two experts (Expert1. 8.5 PID: BBA: 0. interval information is considered to be the most appropriate way to express those variabilities based on available partial evidence.26 0.015 E26 0.0 1. the interval information in evidence theory may not be continuous and intervals can overlap. The interval information for elastic modulus and load is given in Figs.4.625 E21 E22 0.001 Figure 8.015 0.25 1.

5 0.25 1. This scheme allows the expression of opinions intuitively and realistically without making assumptions to reproduce any kind of probabilistic information.5 0. The reason is that the evidence that is supporting interval E13 is not included in the evidence supporting the interval E12.6 Load Factor Information In Fig.875 PID: P21 P22 BBA: 0. The opinions from two different experts are combined using Dempster’s rule of combining.002 Expert2 0. E11 indicates the first expert’s first interval proposition for the factor E.005 0.003 0.375 1.0 1.01 0. It is the basic concept in Dempster’s rule of combining that the propositions in agreement with other information sources are given more credence and they are emphasized by the normalization with the degree of 179 .875 1.9 Figure 8.115 1.07 0.25 1.625 0.125 1. BBAs do not necessarily possess monotonicity and additivity.0 PID: P11 P12 P13 BBA: 0. the BBA of E13 is higher than that of E12.75 0. Even though the interval E12 includes the interval E13.5 P14 P15 P16 0.5.8 0.Expert1 0. Unlike probability in probability theory.09 1. 8.005 0.5 P24 P23 0.125 1.

5 0.8.75 Ec2 0.0113 Ec9 0. 8.8.00001 Ec5 0.7 and 8. 180 .0069 Ec8 0.625 Pc1 0. 0.0031 Pc3 0.3827 Figure 8.0 Pc4 Pc5 Pc6 Pc2 0. Here. 1.0054 Pc8 0.5 Pc7 Pc4 0.0″ as given in Eq.5 0.0010 Ec3 0. (8.75 1.contradiction in Dempster’s rule of combining.0 [67]) to obtain the tip displacements. our goal is to obtain an assessment of the likelihood that the tip displacement exceeds a limit value.25 Ec4 1.00006 2.00004 Pc6 0.375 Ec8 Ec5 1.00001 Ec6 0. Combined Information for Load Factor The structural analyses were conducted with the finite element analysis method by using commercial finite element analysis software (GENESIS7.875 Pc3 Pc1 0.8670 Elastic Modulus factor Ec2 0.5550 Pc8 Pc5 0.25 1.7 Combined Information for Elastic Modulus Factor 0.625 Ec1 0.5 Ec9 Ec7 Ec6 Ec1 0.0004 Ec7 0.0533 1.125 1. By using Dempster’s rule of combining.875 Ec3 1.1). the combined information is given in Figs.4.75 Pc2 Force factor 0.1133 Figure 8.0 1.0001 Ec4 0.0 1.0005 Pc7 0.

1 Intermediate Complexity Wing Results Bel ( disp fail ) Pl _ dec (disp fail ) Pl (disp fail ) Number of Simulations Proposed Method 0. Pl. are obtained by using the surrogate model.1576×10-4 degree of belief for the failure based on given partial evidence.4970×10-3 0.0" − δ Tip < 0} (8.1.1) This goal is realized by obtaining the plausibility Pl (δ fail ) for the set of δ fail with the given experts’ opinions.3198×10-2 100.δ fail = {δ Tip : 1.4.1576×10-4 0.000 181 . three measurements. Table 8.5121×10-3 0. This result shows us that we have 0. the network of response surface functions in each joint proposition. Pl (δ fail ) = mcombined (ε ) (8. The results are compared with those from the uniform sampling of the original function. It is found that the number of simulations is significantly reduced by using this surrogate model as compared to the uniform sampling scheme. and there is 0.3198×10-2 degree of plausibility to face the failure of the wing structure with the displacement limit-state function. Pl_dec. and Bel.4.3198×10-2 360 Sampling Method 0.1576×10-4 0.2) ε :ε ∩δ fail ≠ ∅ As presented in Table 8.

the sensitivity for the first interval of the first expert of parameter P factor. In this example. 8. is 0. P11. 8. If designers want a robust design. in Fig. in Fig. The difference of magnitudes of 182 . then the degree of plausibility might need to be used as an upper bound of probability.4.The degrees of belief and plausibility give the bounds of possible probabilities.1 Sensitivity Analysis The sensitivity of plausibility with respect to each proposition of each expert is shown in Figs. With those sensitivity analysis results. On the other hand. the BBA for the parameter E factor is found to be a more significant contributor to the degree of plausibility than the parameter P. 8.9 can be seen as the primary contributor for decreasing the plausibility. which is placed between Bel and Pl. 8. The information of sensitivity can be effectively used to improve the certainty of the UQ result. Pl_dec. And as a supplementary measurement. the fifth proposition of the first expert for the parameter E factor. By comparing the magnitudes of sensitivities for each parameter’s BBA. This means that the first expert’s BBA for P11 has a trivial effect on the degree of plausibility. and which expert’s opinion is a major uncertainty propagation source. But the degree of Uncertainty can be also used to determine how much one can rely on the result of UQ. E15.5121×10-3.10. one can tell which propositions have negative or positive contributions to the degree of plausibility.9 and 8.10 is almost zero.

  E11 E12 E13 E14 E15 E16 .sensitivity between the elastic modulus factor and the load factor stems from the effect of structural sensitivity of those parameters and from the formation of given interval information in each parameter.

   ∂Pl(C)    ∂m( E1 j )      Expert1       E21 E22 E23 E24 E25 E26 .

limited time. the most contributing intervals (E15 and P12) to the degree of plausibility 183 .    ∂Pl(C )   ∂m( E 2 j )   Expert2   Figure 8. based on sensitivity analysis results in this example.9 Proposition’s Sensitivities of Plausibility of Elastic Modulus Factor As mentioned previously. and so on) should be invested efficiently to quantify the uncertainty in a system. human power. the sensitivity information can be used to determine future data acquisition strategies in which limited resources (due to financial budget. For example.

three thickness factors for three sections of ICW.10 Proposition’s Sensitivities of Plausibility of Load Factor For the sensitivity of plausibility with respect to deterministic parameters.should be investigated by investing resources to collect more data on the intervals. In this example. The purpose is to evaluate the effect of those parameters on the degree of plausibility.4.11 shows the resulting sensitivities for each thickness factor. and by refining the intervals to obtain a more reliable UQ result. the 184 . 8. as shown in Fig. are considered. Figure 8.   ∂Pl(C ) ∂m( P1 j )  P11 P12 P13 P14 P15 P16         Expert1      " P21 P22 P23 P24 % & ' (  #  $ ∂Pl(C )   ∂m( P2 j )  $  # Expert2  "  ! Figure 8.

/+/ )+*-. TH 2. In general.11 Sensitivity of Plausibility with Thickness Factors (TH 1. 1+/ )+*-. it is found from Fig.sensitivities for plausibility have the same trend as the sensitivities for the limit-state function with respect to those deterministic parameters because the deterministic parameters are independent of the uncertain parameters and the limit-state function is also monotonic for the deterministic parameters. the sensitivity of plausibility does not necessarily have the same tendency as the sensitivity of deterministic analysis for a system deterministic parameter because of the dependency of plausibility on the uncertain parameters. 8. when a desired level of plausibility in a system has to be achieved with given imprecise information for uncertain parameters. 0+* )+*-. the plausibility could be efficiently controlled by changing the values of other deterministic parameters with the obtained sensitivity information. .+* Figure 8. 1+* *-. For instance. 2+* TH 1 TH 2 TH 3 3 4 5 )+*-. and TH 3) The sensitivity results with respect to deterministic parameters could be used in a reliability-based design phase. That is. *-. *+/ ∂Pl ∂TH i )+*-.11 that the designer can decrease the failure plausibility to a desired level in the current system much 185 .

as shown in Fig. The model consists of 62 quadrilateral membrane elements with uniform upper and lower skin thicknesses (0. Xd ={TH1.4.more by increasing the value TH 3 than by increasing the value TH 1.4 shows the structural model of the Intermediate Complexity Wing (ICW). The following are the limit-state functions in the context of evidence theory. Xu={E. TH2. and the uncertain variables define the finite uncertain parameter hyperspace with the frame of discernment. The additional Pl_dec measurement has been employed to address the sensitivity analysis problem. 8. 8. Hence. The thicknesses are expressed using three thickness factors. TH3}.0 [67]. It is assumed that uncertainties exist in the scale factors of the elastic modulus (E) and the applied force (F) in the structural model. to demonstrate design optimization based on reliability analysis using evidence theory. The sensitivity offers an appropriate and efficient tool for a robust system design based on reliability prediction.2 Reliability Based Design Optimization Fig. The design variables define the design space of interest for the design optimization with given side bounds. 186 . in the ICW model. The structural analysis of ICW is performed by finite element analysis (FEA) using GENESIS 6. and TH3. Xu}.25 in). is denoted as the function evaluation space. X={Xd. This work is the first attempt to develop the sensitivity analysis of an uncertainty quantification problem using evidence theory. 8. The total space of both types of variables. and uncertain variables. there are deterministic design variables. Aerodynamic loads are applied along the wing surface. for three parts of the model. TH2. F}. which was used in the ASTROS manual [42]. TH1.4.

3) Freq ≥ 1 .5].0000 Pl_dec 0. are [0. hence. In this ICW example.0 2.4. as shown in Table 8. the design variables are the scale factors of skin thickness that are controllable and free from uncertainty.4) For example. 187 . The uncertainties in those parameters determine the uncertainties in responses of the system.1230 Pl 4.2 Failure Degrees of Belief. TH1. the failure degrees of belief. when the thickness factors.3( Hz ) (8.99). and TH3.4. Plausibility Decision.2 0. The degrees of plausibility and belief are discontinuous.0(in) ≤ 1 . for each limit-state function.3 0.4101 For the ICW model. and Plausibility ×10-5 Limit state Tip displacement Frequency Bel 0. and plausibility are obtained.3174 0. the sensitivity of plausibility with respect to structural design can be obtained. we assume that uncertainties in the elastic modulus and the applied force are inevitable. By using the Pl_dec measure.9488 0. Table 8.0 (8. The objective is to minimize the volume of ICW while placing the constraint on the degrees of safe plausibility for the limit-state functions that should be greater than an acceptable degree (0.2.0005 0. plausibility decision. TH2. the degree of plausibility decision (Pl_dec) is used for the constraint functions.Tip displacement : Frequency : Disp tip 7.

2. the sequential quadratic programming (SQP) method with BFGS formulation is selected for the following optimization problem: Objective: To minimize the total volume of the wing for a lighter aircraft.0]) Design variables: The scale factors of the thickness for each part of the wing (TH1.99. The optimum result is obtained in 12 iterations with 112 function evaluations. and TH3). 2. 8. At the optimum. Both displacement and frequency constraints are active for the obtained optimum.0 [78]. In MATLAB 6. Those safe degrees of plausibility decision for each tip displacement and fundamental frequency 188 . Constraints: The safe degrees of plausibility decision (Pl_dec) for limit-state functions (tip displacement and fundamental frequency) >0.12 shows the optimization history of the objective function and design variables for the wing skin. the safe degrees of plausibility decision of both constraints are computed with actual simulations to validate the optimum result. TH2. Fig. the safe degrees of plausibility decision (Pl_dec) for both limit-state functions are 0.The response surface method for a partially suspected proposition has been employed to obtain the gradient of the failure region [18].99 Side bounds of design variables ([0. that is.

Since the MPA model is constructed with uncertain parameters and deterministic variables all together at the initial stage. in3) Wing thickness (TH1.constraints are 0. the creation of the MPA model is required only once in the total iterative procedure of design optimization.12 The Optimization History of Objective Function and Design Variables 189 . and TH3) of both the deterministic design variables and the uncertain variables.9921 and 0.302]. TH3*] = [0. [TH1*.423 FEA simulations are needed to construct the MPA model. The number of actual simulations is highly dependent on the accuracy of the local approximation method and the size of the failure region in the function evaluation space Total Volume (OBJ.9995. a total of 5. OBJ*= 431 (in3) Iteration Number Figure 8.208 0.245 0. TH2*. TH2. In each function evaluation. In this ICW example. the degrees of plausibility decision for both displacement and frequency constraints should be obtained for the uncertain function evaluation space.

In this method. Trust Region Based Reliability Optimization (TRBRO) can be proposed. as in probability theory. the computational cost for an optimization routine might be very expensive and prohibitive. To increase the efficiency of the proposed method. To avoid the high computational cost of both the optimization (outer) loop and the UQ (inner) loop. local variable 190 .. Trust region approaches [79. the key idea is to define trust regions for both deterministic design variables and uncertainty variables in sequential optimization iteration. The deterministic optimum design will have a reliability of approximately 0. In those cases.By constructing the MPA model at the initial stage in this ICW example. 80] manage the selection of move limits (i. the whole uncertain function evaluation space defined by only uncertain variables can be included in the failure region for some levels of deterministic variables.e. UQ using evidence theory is performed only for a limited trust region of uncertain variables with a surrogate model to reduce computational cost and the partial measurement (plausibility or belief) from the trust region is employed as a representative UQ indicator.5. instead of the reliability index or performance measure. The overall design procedure is to move the deterministic optimum design back to a reliability-based optimum design with the trust-regional sequential scheme. RBDO can start from the deterministic optimum design with mean-like values of uncertain variables similar to SORA and PMA+. an efficient sequential optimization strategy for multi-type uncertain variables.

At the t-th iteration, a local optimization problem is formulated with surrogate models in a limited trust region from Eqs. (8.1)-(8.3) as follows:

Minimize $\tilde{f}_t(\mathbf{d})$   (8.6)

Subject to $Pl\big(\tilde{G}_{t,j}(\mathbf{d}, \mathbf{X}) \le 0\big) \ge R_j, \quad j = 1, \ldots, N_g$   (8.7)

$d_{t,i}^{\,l} \le d_{t,i} \le d_{t,i}^{\,u}, \quad i = 1, \ldots, N_d; \qquad X_{t,k}^{\,l} \le X_{t,k} \le X_{t,k}^{\,u}, \quad k = 1, \ldots, N_r$   (8.8)

where the tilde denotes a surrogate model. Eqs. (8.6)-(8.8) are solved around the current trust region of $\mathbf{d}_t$ and $\mathbf{X}_t$. The move limits are defined by the trust regions $\Delta_{d,t}$ and $\Delta_{X,t}$, which define the local bounds $[\mathbf{d}_t^{L}, \mathbf{d}_t^{U}]$ and $[\mathbf{X}_t^{L}, \mathbf{X}_t^{U}]$, respectively. $\Delta_t$ defines a hypercube around $\mathbf{d}_t$ and $\mathbf{X}_t$, where $\|\mathbf{d} - \mathbf{d}_t\|_p \le \Delta_{d,t}$ and the $p$ norm defines the shape of the region; similarly, $\Delta_{X,t}$ is defined for the uncertain variables. The move limits of a trust region are restricted by the global limits (the entire function evaluation space) of the deterministic and uncertain variables. In this work, the degree of plausibility of evidence theory, instead of the reliability index or the performance measure, is selected as the measure of uncertainty, being an upper bound of probability. It is required for TRBRO that probabilistic variables with unbounded PDFs be described as bounded uncertain variables by lumping the marginal probability onto appropriately trimmed boundaries. Since surrogate models are used for the UQ of the uncertainty constraints in a defined trust region, the uncertainty measure (degree of plausibility) can be obtained with minor computational cost.
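For instance, the local bounds of Eq. (8.8) can be formed by intersecting an infinity-norm hypercube around the current iterate with the global limits; a small sketch with hypothetical values:

```python
import numpy as np

def trust_region_bounds(center, delta, global_lo, global_hi):
    """Local bounds [x^l, x^u] from an infinity-norm trust region of radius
    delta around `center`, clipped to the global function-evaluation space."""
    lo = np.maximum(center - delta, global_lo)
    hi = np.minimum(center + delta, global_hi)
    return lo, hi

# Example: trust region around the current uncertain-variable point.
x_t = np.array([0.6, 1.2])
lo, hi = trust_region_bounds(x_t, delta=0.25,
                             global_lo=np.array([0.0, 0.0]),
                             global_hi=np.array([1.0, 2.0]))
print(lo, hi)
```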

The trust region of the deterministic design variables is traditionally updated based on the values of the previous objective and constraint functions. On the other hand, the trust region of the uncertain variables is updated to keep the degree of failure plausibility in the trust region greater than the required one, unless the trust region reaches the global bounds of the uncertain variables.

Figure 8.13 Trust Region Uncertainty Quantification for Sequential Optimization Under Multiple Types of Uncertain Variables (flowchart: construct the entire function evaluation space (EFES); perform deterministic design optimization; define initial trust regions (TR) for both the deterministic and the uncertain variables; construct surrogate models for the defined TR; perform reliability-based design optimization within the TR; check convergence and whether Pl_dec equals the target; update the TR of the deterministic or the uncertain variables; stop when converged or when the TR reaches the global limits of the uncertain variables)
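In outline, the sequence of Fig. 8.13 might look like the toy sketch below; the one-dimensional design and uncertain variables, the assumed BBA, and the grid-search subproblem are all hypothetical simplifications of the actual SQP-based procedure.

```python
import numpy as np

# Toy TRBRO sketch: one design variable d (minimized as a stand-in for
# weight) and one uncertain variable X with an assumed interval BBA.
# The event g(d, x) = x - d <= 0 is "safe", and we require Pl(safe) >= R.
BBA = [((-1.0, 0.0), 0.3), ((0.0, 1.0), 0.5), ((1.0, 2.0), 0.2)]
R = 0.95
X_LO, X_HI = -1.0, 2.0

def pl_safe(d, lo, hi):
    """Partial plausibility of the safe event measured only on the trust
    region [lo, hi] of X: count the BBA of every focal element whose
    restriction to the trust region intersects the safe set {x <= d}."""
    pl = 0.0
    for (a, b), m in BBA:
        aa, bb = max(a, lo), min(b, hi)
        if aa <= bb and aa <= d:    # g is increasing in x: check left end
            pl += m
    return pl

def local_rbdo(lo, hi):
    # Cheapest d meeting the Pl requirement inside this trust region
    # (a crude grid search stands in for the SQP subproblem of Eq. (8.6)).
    for d in np.linspace(0.0, 3.0, 3001):
        if pl_safe(d, lo, hi) >= R:
            return d
    return 3.0

x_center, tr = 0.5, 0.5              # initial trust region of X
while True:
    lo, hi = max(x_center - tr, X_LO), min(x_center + tr, X_HI)
    d = local_rbdo(lo, hi)
    if (lo, hi) == (X_LO, X_HI):
        break                        # trust region reached the global limits
    tr *= 2.0                        # otherwise enlarge it and resolve
print(f"optimum d = {d:.3f}, Pl(safe) = {pl_safe(d, X_LO, X_HI):.2f}")
```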

It is noted that the initial design of the reliability-based optimization has approximately 0.5 reliability, and in most engineering problems very high reliability (i.e., 0.9 or six-sigma reliability) is required. Hence, the failure region might be very small compared to the entire function evaluation space. As a result, the computational cost is reduced significantly by limiting the region of the UQ calculation. In the proposed method, there is no iterative procedure at the UQ level, and the actual UQ is performed instead of an approximating UQ index; that is, TRBRO is robust and efficient. The continuous Pl_dec function is used as a supplementary measure in TRBRO to obtain the UQ sensitivity with respect to the deterministic design variables. The updating procedure for the trust region of the uncertain variables needs to check the failure surface and its sensitivity information. The proposed method is valid not only for non-probabilistic variables, but also for probabilistic variables, by defining reasonable trimmed boundaries. The conceptual numerical procedure of the proposed TRBRO is illustrated in Fig. 8.13.

Summary

As a generalization of the classical probability and possibility theories from the perspective of bodies of evidence and their measures, evidence theory can handle both epistemic and aleatory uncertainties in one framework. Until now, when multiple types of uncertainties coexist in a target structural reliability analysis, UQ analyses have been performed by treating them separately or by making assumptions that force the problem into either the probabilistic framework or the fuzzy set framework. In this work, the possibility of adopting evidence theory as a general tool of UQ analysis for multiple types of uncertainties was investigated. It was found that, because of the flexibility of the basic axioms of evidence theory, not only aleatory (random) uncertainty but also epistemic (non-random) uncertainty can be tackled within its framework without any baseless assumptions. Evidence theory allows pre-existing probability information to be utilized together with epistemic information (certain bounds, a possibilistic membership function, etc.) to assess the likelihood of a limit-state function. However, the Basic Belief Assignment (BBA) structure of evidence theory usually is not a continuous explicit function of the given imprecise information. Because of the discontinuity in the BBA, an intensive computational cost might be inevitable when quantifying uncertainty using evidence theory.
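To see this discontinuity concretely: with an interval BBA, the belief and plausibility of exceeding a deterministic threshold change only in jumps as the threshold crosses focal-element endpoints. A tiny sketch with assumed numbers:

```python
# An assumed BBA over intervals of an uncertain variable X; the plausibility
# that X exceeds a deterministic threshold c is a step function of c, which
# illustrates the discontinuity that drives the computational cost.
BBA = [((0.0, 1.0), 0.2), ((1.0, 2.0), 0.5), ((1.5, 3.0), 0.3)]

def pl_exceed(c):
    # An interval [a, b] can contain a value above c whenever b > c.
    return sum(m for (a, b), m in BBA if b > c)

def bel_exceed(c):
    # [a, b] certainly lies above c only when a >= c.
    return sum(m for (a, b), m in BBA if a >= c)

for c in (0.5, 1.0, 1.25, 1.75, 2.5):
    print(f"c = {c:4}:  Bel = {bel_exceed(c):.2f}  Pl = {pl_exceed(c):.2f}")
```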

To alleviate this intensive computational cost, optimization and approximation techniques were employed to identify the failure region and to invest the computational resources only on the identified failure region; that is, a cost-efficient algorithm using MPA was developed. It was found that the Belief and Plausibility functions were computed efficiently, without sacrificing the accuracy of the resulting measurements, by using the proposed cost-efficient algorithm. In the effort of reducing the computational cost further, a new direct and exact reanalysis technique, the Successive Matrix Inversion (SMI) method, was developed based on the binomial series expansion of a structural coefficient matrix. The SMI method includes the capability to update both the inverse of the modified coefficient matrix and the modified response vector of a target structural system by introducing an influence vector storage matrix and a vector-updating operator. The SMI method gives exact solutions for any variation of an initial design in a Finite Element Analysis (FEA); that is, there is no restriction on the valid bounds of the design modification in the use of SMI. Since the cost of reanalysis using SMI scales with the ratio of the changed portion to the initial coefficient matrix, the SMI method is especially effective for a regional modification of a structural FEA model. The SMI method is also utilized in an iterative reanalysis procedure to accelerate the convergence rate, and even to make an iterative solution converge that may have diverged otherwise. As a new class of linear system solver that combines a direct solver and an iterative solver for the first time, the proposed Combined Iterative (CI) methods (SMI and BSI) can be efficiently applied in a design optimization to reduce the computational cost of the many repetitive simulations required for a general, non-symmetric coefficient matrix.
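The following sketch conveys the flavor of such an inverse update, using the Sherman-Morrison identity that SMI generalizes; it is not the dissertation's exact influence-vector operator, and the matrices are hypothetical.

```python
import numpy as np

def successive_inverse_update(K0_inv, dK):
    """Update the inverse of K = K0 + dK column by column with the
    Sherman-Morrison identity. The cost scales with the number of modified
    columns, which is why regional modifications are cheap to reanalyze.
    (A sketch of the idea behind SMI, not its exact algorithm.)"""
    B = K0_inv.copy()
    n = K0_inv.shape[0]
    for j in range(n):
        col = dK[:, j]
        if not col.any():
            continue                    # untouched column: no work at all
        e_j = np.zeros(n); e_j[j] = 1.0
        Bu = B @ col                    # influence vector of this column
        B -= np.outer(Bu, e_j @ B) / (1.0 + Bu[j])
    return B

# Verify against a direct inverse on a small "stiffness" matrix.
rng = np.random.default_rng(0)
K0 = np.eye(5) * 4.0 + rng.random((5, 5)) * 0.1
dK = np.zeros((5, 5)); dK[2:4, 2:4] = 0.3   # a regional modification
B = successive_inverse_update(np.linalg.inv(K0), dK)
print(np.allclose(B, np.linalg.inv(K0 + dK)))
```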

A new iterative method, the Binomial Series Iteration (BSI) method, was also developed and demonstrated with numerical examples. Since there is no computation for building up orthogonal basis vectors in BSI, BSI combined with SMI can be efficiently applied to a general, non-symmetric coefficient matrix. It is found that, with the cost-efficient system reanalysis techniques and the UQ algorithm, the general UQ framework of evidence theory can be applied successfully to practical and large-scale engineering applications.
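A minimal sketch of a binomial (Neumann) series iteration of the kind BSI builds on, applied to a modified nonsymmetric system; the matrices are hypothetical, and the convergence condition (spectral radius of K0^{-1} dK below one) is assumed to hold.

```python
import numpy as np

def bsi_solve(K0_inv, dK, f, tol=1e-10, max_iter=200):
    """Binomial-series iteration for (K0 + dK) u = f:
    u_{k+1} = K0^{-1} f - (K0^{-1} dK) u_k, i.e. the partial sums of the
    binomial (Neumann) expansion of (I + K0^{-1} dK)^{-1}. No orthogonal
    basis vectors are built, so nonsymmetric dK is handled directly."""
    b = K0_inv @ f
    M = K0_inv @ dK
    u = b.copy()
    for _ in range(max_iter):
        u_next = b - M @ u
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u

rng = np.random.default_rng(1)
K0 = np.eye(6) * 5.0
dK = rng.random((6, 6)) * 0.3          # nonsymmetric modification
f = rng.random(6)
u = bsi_solve(np.linalg.inv(K0), dK, f)
print(np.allclose((K0 + dK) @ u, f))
```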

In the comparison study of the different reliability approaches, it was found that a BBA structure in evidence theory can be used to model both fuzzy sets and probability distribution functions because of its flexibility. That is, multiple types of information (a fuzzy membership function, a PDF, interval information, and so forth) can be incorporated in the unified framework of evidence theory to quantify the uncertainty in a system. Probability theory does not allow any impreciseness in the given information, so it gives a single-valued result. Possibility theory and evidence theory, however, give a bounded result, that is, lower and upper bounds on the probability based on the available information. The result from possibility theory gives the most conservative bound ([Necessity, Possibility]), which always includes the probabilistic result, essentially because of Zadeh's extension principle; in that principle, the degree of membership of the system response corresponds to the degree of membership of the overall most-preferred set of fuzzy variables. Evidence theory gives an intermediate bounded result ([Belief, Plausibility]). The obtained bounded result of evidence theory, which tends to be less conservative than that of possibility theory and less restrictive than the result of probability theory, can be viewed as the best estimate of the system uncertainty, because the given imprecise information is propagated through the given limit-state function without any unnecessary assumptions.

The sensitivity with respect to a deterministic parameter in an engineering structural system was developed to improve the current design by efficiently decreasing the failure plausibility of a limit-state function. However, the plausibility function in evidence theory is a discontinuous function of a varying deterministic parameter because of the discontinuity of a BBA structure for the uncertain parameters. The gradient of plausibility was therefore represented using the degree of plausibility decision (Pl_dec), which was introduced by applying the generalized insufficient reason principle to the plausibility function. Pl_dec can also be used as a supplemental measurement for deciding whether a system can be accepted. In the sensitivity analysis of plausibility with respect to an expert opinion, the goal was to find the primary contributing expert opinion for the degree of plausibility; the result indicates on which proposition the computational effort and the future collection of information should be focused. This sensitivity analysis can be easily shifted from the sensitivity of plausibility to the sensitivity of uncertainty, which is defined by the subtraction of belief from plausibility. By decreasing the degree of uncertainty, we can be more confident in the reliability analysis result.
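A sketch of how Pl_dec and its sensitivity could be computed under the generalized insufficient reason principle, with an assumed interval BBA; spreading each focal element's BBA uniformly over the element is what restores continuity.

```python
BBA = [((0.0, 1.0), 0.4), ((0.5, 2.0), 0.6)]   # assumed interval BBA on X

def pl_dec_fail(c):
    """Degree of plausibility decision that X > c: each focal element's BBA
    is spread uniformly over the element (generalized insufficient reason
    principle), so the measure varies continuously with c."""
    total = 0.0
    for (a, b), m in BBA:
        frac_below = (min(b, max(a, c)) - a) / (b - a)  # portion of [a,b] below c
        total += m * (1.0 - frac_below)
    return total

def d_pl_dec(c, h=1e-6):
    # Finite-difference sensitivity with respect to the deterministic
    # parameter c; well defined because Pl_dec is continuous.
    return (pl_dec_fail(c + h) - pl_dec_fail(c - h)) / (2.0 * h)

for c in (0.25, 0.75, 1.5):
    print(f"c = {c}: Pl_dec = {pl_dec_fail(c):.3f}, dPl_dec/dc = {d_pl_dec(c):.3f}")
```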

For the high-efficiency design of engineering structures, mathematical optimization techniques are usually employed. However, without considering the uncertainties in the design parameters, operating conditions, and physical behavior, the optimized design might carry a catastrophically high risk. Hence, evidence theory was applied to reliability-based design optimization using an efficient sequential optimization strategy, in conjunction with the cost-effective algorithm and the sensitivity techniques. To avoid the high computational cost of both the optimization (outer) loop and the UQ (inner) loop, Trust Region Based Reliability Optimization (TRBRO), an efficient sequential optimization strategy for multi-type uncertain variables, is proposed. In the proposed method, the key idea is to define trust regions for both the deterministic design variables and the uncertain variables in each sequential optimization iteration. The proposed method starts from the deterministic optimum design with mean-like values of the uncertain variables, similar to SORA and PMA+, and moves the deterministic optimum design back to a reliability-based optimum design with the trust-regional sequential scheme. UQ using evidence theory is performed only for a limited trust region of the uncertain variables with a surrogate model. The resulting optimum design of a target structure has a robust optimal performance under the intrinsic uncertainties.

Future Directions

As mentioned earlier, evidence theory is not well known in the structural mechanics community. Recently, owing to its physically appealing theoretical strength, many structural researchers have started to show interest in evidence theory and its applications.

However, there are still many issues under discussion. Some of the issues in UQ and system reanalysis techniques are listed as follows.

First, in this work it is shown that the BBA structure can express other types of information (e.g., possibilistic and probabilistic distributions) because of its flexibility. It is therefore important to incorporate pre-existing probabilistic or possibilistic information appropriately in the framework of evidence theory. The information conversion must be studied for an appropriate and reasonable translation of the different formats of belief assignments, because in some cases slightly different reasoning can make a big difference in the result of UQ (two such conversions are sketched after this list).

Second, the aggregation of imprecise uncertainty information from multiple sources needs to be investigated. The correlation effects of both the uncertain variables and the multiple sources must be considered for an unbiased reliability analysis.

Third, in this work only parametric uncertainty is addressed. In practice, the uncertainty from an imperfect or vague model form can be more influential and critical to the uncertainty propagation in a system. There are some attempts to express the model form uncertainty from the probabilistic viewpoint; however, the model form uncertainty is fundamentally epistemic, and it can be tackled properly within the framework of evidence theory.

Fourth, advanced computational schemes regarding evidence theory can be investigated for (sampling-based or analytical) reliability analysis and sensitivity analysis. Even though many computational methods are presented in this work, more efficient methodologies can still be developed by using different approaches and optimization techniques for better computational performance in an engineering structural design.

Fifth, the proposed Combined Iterative (CI) method needs to be studied further to provide solid and efficient guidelines for the combining schemes between SMI and an iterative method.
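Regarding the first issue above, the sketch below shows two plausible (by no means unique) information conversions into BBA structures: a consonant BBA from a triangular possibility distribution via alpha-cuts, and an equal-probability interval BBA from a PDF with trimmed tails. All parameters are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def triangular_possibility_to_bba(lo, mode, hi, n_cuts=4):
    """Consonant (nested) BBA from a triangular membership function via
    alpha-cuts: each cut receives the BBA released between successive
    membership levels. One possible conversion, not a unique one."""
    alphas = np.linspace(1.0, 0.0, n_cuts + 1)
    bba = []
    for hi_a, lo_a in zip(alphas[:-1], alphas[1:]):
        cut = (lo + lo_a * (mode - lo), hi - lo_a * (hi - mode))  # cut at lo_a
        bba.append((cut, hi_a - lo_a))
    return bba

def pdf_to_bba(ppf, n_bins=4):
    """Equal-probability interval BBA from a probability distribution via
    its quantile function; the unbounded tails are lumped onto trimmed
    boundaries, as TRBRO requires for probabilistic variables."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)
    qs[0], qs[-1] = 0.001, 0.999
    edges = [ppf(q) for q in qs]
    return [((edges[i], edges[i + 1]), 1.0 / n_bins) for i in range(n_bins)]

print(triangular_possibility_to_bba(0.0, 1.0, 3.0))
print(pdf_to_bba(norm(loc=10.0, scale=2.0).ppf))
```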

References

1. Oberkampf, W. L., and Helton, J. C., "Investigation of Evidence Theory for Engineering Applications," Non-Deterministic Approaches Forum, AIAA-2002-1569, Denver, CO, April 2002.
2. Helton, J. C., "Treatment of Uncertainty in Performance Assessments for Complex Systems," Risk Analysis, Vol. 14, No. 4, 1994, pp. 483-511.
3. Hoffman, F. O., and Hammonds, J. S., "Propagation of Uncertainty in Risk Assessments: The Need to Distinguish Between Uncertainty Due to Lack of Knowledge and Uncertainty Due to Variability," Risk Analysis, Vol. 14, No. 5, 1994, pp. 707-712.
4. Ferson, S., and Ginzburg, L. R., "Different Methods are Needed to Propagate Ignorance and Variability," Reliability Engineering and System Safety, Vol. 54, 1996, pp. 133-144.
5. Hunter, A., and Parsons, S. (eds.), Applications of Uncertainty Formalisms, Springer, New York, 1998.
6. Nikolaidis, E., Cudney, H., Chen, S., Haftka, R. T., and Rosca, R., "Comparison of Probabilistic and Possibility Theory-Based Methods for Design Against Catastrophic Failure Under Uncertainty," ASME International Conference on Design Theory and Methodology, Las Vegas, NE, 1999.

7. Breitung, K., "Asymptotic Approximations for Multinormal Integrals," Journal of the Engineering Mechanics Division, ASCE, Vol. 110, 1984, pp. 357-366.
8. Hasofer, A. M., and Lind, N. C., "Exact and Invariant Second-Moment Code Format," Journal of the Engineering Mechanics Division, ASCE, Vol. 100 (EM1), 1974, pp. 111-121.
9. Tvedt, L., "Two Second Order Approximations to the Failure Probability," Section on Structural Reliability, A/S Veritas Research, Hovik, Norway, 1984.
10. Tvedt, L., "Distribution of Quadratic Forms in Normal Space-Application to Structural Reliability," Journal of the Engineering Mechanics Division, ASCE, Vol. 116, 1990, pp. 1183-1197.
11. Metropolis, N., and Ulam, S., "The Monte Carlo Method," Journal of the American Statistical Association, Vol. 44, 1949, pp. 335-341.
12. Ghanem, R., and Spanos, P. D., Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York, NY, 1991.
13. Zadeh, L. A., "Fuzzy Sets," Information and Control, Vol. 8, 1965, pp. 338-353.
14. Shafer, G., A Mathematical Theory of Evidence, Princeton University Press, Princeton, NJ, 1976.
15. Wood, K. L., Otto, K. N., and Antonsson, E. K., "Engineering Design Calculations with Fuzzy Parameters," Fuzzy Sets and Systems, Vol. 52, 1992, pp. 1-20.
16. Penmetsa, R. C., and Grandhi, R. V., "Efficient Estimation of Structural Reliability for Problems with Uncertain Intervals," Computers and Structures, Vol. 80, 2002, pp. 1103-1112.

17. Wang, L. P., and Grandhi, R. V., "Improved Two-Point Function Approximations for Design Optimization," AIAA Journal, Vol. 33, No. 9, 1995, pp. 1720-1727.
18. Xu, S., and Grandhi, R. V., "Multi-Point Approximation Development: Thermal Structural Optimization Case Study," International Journal for Numerical Methods in Engineering, Vol. 48, 2000, pp. 1151-1164.
19. Wang, L. P., and Grandhi, R. V., "Multi-Point Approximations: Comparisons Using Structural Size, Configuration and Shape Design," Structural Optimization, Vol. 12, 1996, pp. 177-185.
20. Oberkampf, W. L., and Helton, J. C., "Mathematical Representation of Uncertainty," Non-Deterministic Approaches Forum, AIAA-2001-1645, Seattle, WA, April 2001.
21. Bloch, I., and Maitre, H., "Data Fusion in 2D and 3D Image Processing," X Brazilian Symposium on Computer Graphics and Image Processing, Campos do Jordao, Brazil, Oct. 1997.
22. Chen, L., and Rao, S. S., "A Modified Dempster-Shafer Theory for Multicriteria Optimization," Engineering Optimization, Vol. 30, 1998, pp. 177-201.
23. Dempster, A. P., Laird, N. M., and Rubin, D. B., "Maximum Likelihood from Incomplete Data Via the EM Algorithm," Journal of the Royal Statistical Society, Ser. B, Vol. 39, No. 1, 1977, pp. 1-38.
24. Xu, S., and Grandhi, R. V., "Structural Optimization with Thermal and Mechanical Constraints," Journal of Aircraft, Vol. 36, 1999, pp. 29-35.

25. Zadeh, L. A., "The Concept of a Linguistic Variable and its Application to Approximate Reasoning," Information Sciences, Vol. 8, 1975, pp. 199-249.
26. Wang, L., and Grandhi, R. V., "Efficient Safety Index Calculation for Structural Reliability Analysis," Computers and Structures, Vol. 52, No. 1, 1994, pp. 103-111.
27. Wang, L., Grandhi, R. V., and Hopkins, D. A., "Structural Reliability Optimization Using An Effective Safety Index Calculation Procedure," International Journal for Numerical Methods in Engineering, Vol. 38, 1995, pp. 1721-1738.
28. Wu, Y.-T., Millwater, H. R., and Cruse, T. A., "Advanced Probabilistic Structural Analysis Methods for Implicit Performance Functions," AIAA Journal, Vol. 28, No. 9, 1990, pp. 1663-1669.
29. Wu, Y.-T., "Computational Method for Efficient Structural Reliability and Reliability Sensitivity Analysis," AIAA Journal, Vol. 32, No. 8, 1994, pp. 1319-1336.
30. McKay, M. D., Beckman, R. J., and Conover, W. J., "A Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code," Technometrics, Vol. 21, No. 2, 1979, pp. 239-245.
31. Hammersley, J. M., and Handscomb, D. C., Monte Carlo Methods, Methuen & Co. Ltd., London, 1964.
32. Moore, R. E., Methods and Applications of Interval Analysis, SIAM Publ., Philadelphia, PA, 1979.
33. Alefeld, G., and Herzberger, J., Introduction to Interval Computations, Academic Press, New York, 1983.

34. Zadeh, L. A., "Review of Shafer's A Mathematical Theory of Evidence," Artificial Intelligence Magazine, Vol. 5, No. 3, 1984, pp. 81-83.
35. Dong, W. M., and Wong, F. S., "Fuzzy Weighted Averages and Implementation of the Extension Principle," Fuzzy Sets and Systems, Vol. 21, 1987, pp. 183-199.
36. Yager, R. R., "On the Dempster-Shafer Framework and New Combination Rules," Information Sciences, Vol. 41, 1987, pp. 93-137.
37. Inagaki, T., "Interdependence Between Safety-Control Policy and Multiple-Sensor Schemes Via Dempster-Shafer Theory," IEEE Transactions on Reliability, Vol. 40, No. 2, 1991, pp. 182-188.
38. Sentz, K., and Ferson, S., "Combination of Evidence in Dempster-Shafer Theory," SAND2002-0835 Report, Sandia National Laboratories, April 2002.
39. Arora, J. S., "Survey of Structural Reanalysis Techniques," Journal of the Structural Division, ASCE, Vol. 102, 1976, pp. 783-802.
40. Barthelemy, J.-F. M., and Haftka, R. T., "Approximation Concepts for Optimum Design—a Review," Structural Optimization, Vol. 5, 1993, pp. 129-144.
41. Dong, W., and Shah, H. C., "Vertex Method for Computing Functions of Fuzzy Variables," Fuzzy Sets and Systems, Vol. 24, 1987, pp. 65-78.
42. Walpole, R. E., Probability and Statistics for Engineers and Scientists, Prentice Hall, New Jersey, 1997.
43. Arora, J. S., Introduction to Optimum Design, McGraw-Hill, New York, 1989.
44. ASTROS Theoretical Manual for Version 2.0, Universal Analytics, Inc., Torrance, CA, 1993.

45. Toropov, V. V., Filatov, A. A., and Polynkin, A. A., "Multiparameter Structural Optimization Using FEM and Multipoint Explicit Approximations," Structural Optimization, Vol. 6, 1993, pp. 7-14.
46. Starnes, J. H., Jr., and Haftka, R. T., "Preliminary Design of Composite Wings for Buckling, Stress and Displacement Constraints," AIAA Journal of Aircraft, Vol. 16, 1979, pp. 564-570.
47. Haftka, R. T., Nachlas, J. A., Watson, L. T., Rizzo, T., and Desai, R., "Two-Point Constraint Approximation in Structural Optimization," Computer Methods in Applied Mechanics and Engineering, Vol. 60, 1987, pp. 289-301.
48. Noor, A. K., and Lowder, H. E., "Approximate Techniques of Structural Reanalysis," Computers and Structures, Vol. 4, 1974, pp. 801-812.
49. Box, G. E. P., and Draper, N. R., Evolutionary Operation: A Statistical Method for Process Management, John Wiley & Sons, New York, 1969.
50. Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P., "Design and Analysis of Computer Experiments," Statistical Science, Vol. 4, No. 4, 1989, pp. 409-435.
51. Saad, Y., and Schultz, M. H., "GMRES: a Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems," SIAM Journal on Scientific and Statistical Computing, Vol. 7, No. 3, 1986, pp. 856-869.
52. Van der Vorst, H. A., "Bi-CGSTAB: A Fast and Smoothly Converging Variant of Bi-CG for the Solution of Nonsymmetric Linear Systems," SIAM Journal on Scientific and Statistical Computing, Vol. 13, No. 2, 1992, pp. 631-644.

53. Fadel, G. M., Riley, M. F., and Barthelemy, J.-F. M., "Two Point Exponential Approximation in Structural Optimization," Structural Optimization, Vol. 2, 1990, pp. 117-124.
54. Kirsch, U., "Efficient-Accurate Reanalysis for Structural Optimization," AIAA Journal, Vol. 37, No. 12, 1999, pp. 1663-1669.
55. Akgün, M. A., Garcelon, J. H., and Haftka, R. T., "Fast Exact Linear and Non-linear Structural Reanalysis and the Sherman-Morrison-Woodbury Formulas," International Journal for Numerical Methods in Engineering, Vol. 50, 2001, pp. 1587-1606.
56. Sherman, J., and Morrison, W. J., "Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix," Annals of Mathematical Statistics, Vol. 21, 1950, pp. 124-127.
57. Woodbury, M., "Inverting Modified Matrices," Memorandum Report No. 42, Statistical Research Group, Princeton University, Princeton, NJ, 1950.
58. Ohsaki, M., "Random Search Method Based on Exact Reanalysis for Topology Optimization of Trusses with Discrete Cross-Sectional Areas," Computers and Structures, Vol. 79, 2001, pp. 673-679.
59. Kavlie, D., and Powell, G. H., "Efficient Reanalysis of Modified Structures," Journal of the Structural Division, ASCE, Vol. 97 (ST1), 1971, pp. 377-392.
60. Saad, Y., and Van der Vorst, H. A., "Iterative Solution of Linear Systems in the 20th Century," Journal of Computational and Applied Mathematics, Vol. 123, 2000, pp. 1-33.

61. Chickermane, H., and Gea, H. C., "Structural Optimization Using a New Local Approximation Method," International Journal for Numerical Methods in Engineering, Vol. 39, 1996, pp. 829-846.
62. Haftka, R. T., and Gürdal, Z., Elements of Structural Optimization, 3rd ed., Kluwer Academic Publishers, Dordrecht, Netherlands, 1992.
63. Cohen, P. R., Heuristic Reasoning about Uncertainty: An Artificial Intelligence Approach, Morgan Kaufmann, London, 1985.
64. Nguyen, H. T., and Walker, E. A., A First Course in Fuzzy Logic, CRC Press, Boca Raton, 1997.
65. Pozrikidis, C., Numerical Computation in Science and Engineering, Oxford University Press, New York, 1998.
66. Reddy, J. N., Mechanics of Laminated Composite Plates: Theory and Analysis, CRC Press, Boca Raton, 1997.
67. Tonon, F., Bernardini, A., and Mammino, A., "Determination of Parameters Range in Rock Engineering by Means of Random Set Theory," Reliability Engineering and System Safety, Vol. 70, 2000, pp. 241-261.
68. Dubois, D., and Prade, H., "Random Sets and Fuzzy Interval Analysis," Fuzzy Sets and Systems, Vol. 38, 1990, pp. 308-312.
69. Liou, T.-S., and Wang, M.-J. J., "Fuzzy Weighted Average: an Improved Algorithm," Fuzzy Sets and Systems, Vol. 49, 1992, pp. 307-315.
70. GENESIS User Manual, Vanderplaats Research & Development, Inc., Colorado.

71. Wu, Y.-T., Shin, Y., Sues, R., and Cesare, M., "Safety Factor Based Approach for Probability-based Design Optimization," 42nd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, AIAA paper 2001-1645, Seattle, WA, April 2001.
72. Du, X., and Chen, W., "Sequential Optimization and Reliability Assessment Method for Efficient Probabilistic Design," ASME Design Engineering Technical Conferences, DETC2002/DAC-34127, Montreal, Canada, 2002.
73. Youn, B. D., Choi, K. K., and Du, L., "Enriched Performance Measure Approach (PMA+) and its Numerical Method for Reliability-Based Design Optimization," 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, AIAA-2004-4401, Albany, NY, 2004.
74. Tonon, F., "Using Random Set Theory to Propagate Epistemic Uncertainty Through a Mechanical System," Reliability Engineering and System Safety, Vol. 85, 2004, pp. 169-181.
75. Guh, Y.-Y., Hon, C.-C., Wang, K.-M., and Lee, E. S., "Fuzzy Weighted Average: a Max-min Paired Elimination Method," Computers and Mathematics with Applications, Vol. 32, 1996, pp. 115-123.
76. Savage, L. J., The Foundations of Statistics, Dover Pub., Inc., New York, 1972.
77. Antonsson, E. K., and Otto, K. N., "Improving Engineering Design with Fuzzy Sets," in Dubois, D., Prade, H., and Yager, R. R. (eds.), Fuzzy Information Engineering: A Guided Tour of Applications, John Wiley & Sons, New York, 1997.
78. MATLAB Optimization Toolbox User's Guide, The MathWorks, Inc., Natick, MA, 2004.

C. pp. and Trucano. M. D. and Sorensen..” SIAM Journal on Scientific and Statistical Computing.. 1983. 2002. Moré. 210 . GA. J. “Computing a Trust Region Step. Atlanta. 553-572. 3. Giunta.79. S.” 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization. Eldred. A. G.. J. F. “Formulations for Surrogate-Based Optimization Under Uncertainty. S. Vol. 80.. A. AIAA-2002-5585. T.. Jr.. Wojtkiewicz..