
CHAPTER 1

INTRODUCTION

1.1 Vulnerability curves or Fragility curves


Vulnerability curves relate strong-motion shaking severity to the probability of reaching or
exceeding a specified performance limit state. Strong-motion shaking severity may be expressed
by an intensity (I), peak ground parameters (a, v or d) or spectral ordinates (Sa, Sv or Sd)
corresponding to an important structural period. Vulnerability curves play a critical role in regional
seismic risk and loss estimation as they give the probability of attaining a certain damage state
when a structure is subjected to a specified demand. Such loss estimations are essential for the
important purposes of disaster planning and formulating risk reduction policies. The driving
technical engines of a regional seismic risk and loss estimation system are:

• Seismic hazard maps (i.e. peak ground parameters or spectral ordinates).

• Vulnerability functions (i.e. relationships of conditional probability of reaching or exceeding a performance limit state given the measure of earthquake shaking).

• Inventory data (i.e. numbers, location and characteristics of the exposed system or elements of a system).

• Integration and visualization capabilities (i.e. data management framework, integration of seismic risk and graphical projection of the results).

1.2 Definition

A fragility curve is a probabilistic way of assessing the vulnerability of a bridge or any other structure under a seismic event. It is generally represented as the conditional probability of meeting or exceeding a limit state (e.g. collapse) for a given ground motion intensity (e.g. peak ground acceleration, PGA).

1.3 Categorization of Vulnerability curves


 Empirical vulnerability curve (Based on post-earthquake survey)
 Judgmental vulnerability curve (Based on expert opinion)
 Analytical vulnerability curve (Based on damage distributions simulated from the
analyses)
 Hybrid vulnerability curve (Based on analytical or judgment-based relationships combined with observational data and experimental results)

1.4 Methods for seismic fragility analysis


Structural reliability is described by the fragility curve Pf(α), which expresses the probability that the structure fails under seismic load level α. The parameter α is also called the seismic Intensity Measure (IM). The seismic hazard curve H(α) expresses the annual probability of exceeding seismic load intensity α. The resulting failure probability Pf,total is then obtained, by virtue of the total probability theorem, by integrating the fragility with respect to the seismic hazard. This can be expressed as

Pf,total = ∫ Pf(α) · |dH(α)/dα| dα

where |dH(α)/dα| dα is the annual probability that the seismic intensity lies in the interval [α, α + dα].

An appropriate selection of the seismic intensity measure reduces the epistemic uncertainties in the seismic reliability analysis. The most suitable intensity measure for a reliability analysis also depends on the type of structure: some parameters perform better for structures where the maximum response determines failure (e.g. steel structures), while other parameters fit better for structures where degradation plays a dominant role (e.g. masonry structures).

In what follows, the notion of a fragility curve is recalled and the different methods for the evaluation of fragility curves are described in more detail.

The most general expression of a fragility curve as a conditional probability reads

Pf(α) = P(DM > DS | IM = α)

where DM is the damage measure (structural response) and DS the threshold defining the damage state.
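As a numerical sketch of the convolution of fragility and hazard described in Section 1.4 (all parameter values below are hypothetical example values, not from the source):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

# Hypothetical lognormal fragility: median capacity 0.6 g, dispersion 0.4
theta, beta = 0.6, 0.4
pga = np.linspace(0.01, 3.0, 500)                 # IM grid (g)
fragility = norm.cdf(np.log(pga / theta) / beta)  # Pf(alpha)

# Hypothetical hazard curve H(a) = k0 * a**(-k) (annual exceedance)
k0, k = 1e-4, 2.5
H = k0 * pga**(-k)
dH_da = np.gradient(H, pga)                       # slope of hazard curve

# Total annual failure probability via the total probability theorem
pf_total = trapezoid(fragility * np.abs(dH_da), pga)
print(pf_total)
```

The integrand is small both at low intensities (fragility near zero) and at high intensities (hazard derivative near zero), so the result is dominated by intermediate IM levels.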

1.5 Ground Motion Uncertainty


Seismologists usually define strong ground motion as the strong earthquake shaking that occurs
close to (less than about 50 km from) a causative fault.
The prediction of ground-motion levels to be expected at a site is one of the key elements of seismic
hazard assessment. This prediction is commonly achieved using empirical ground motion
prediction equations (GMPE) derived through regression analysis on selected sets of
instrumentally recorded strong-motion data. These equations relate a predicted variable
characterizing the level of shaking, most commonly the logarithm of a peak ground-motion
parameter (e.g., PGA, PGV) or response spectral ordinate (SA, PSA, PSV, SD), to a set of
explanatory variables describing the earthquake source, wave propagation path and site conditions.
The explanatory variables usually include the earthquake magnitude, M, a factor describing the
style-of-faulting of the causative event, a measure of the source-to-site distance, R, and a parameter
characterizing the site class.

1.6 Earthquake Ground motion

For the design of structures to resist earthquakes, it is necessary to have some knowledge of ground motions. Earthquake motion can be recorded in terms of ground displacement, velocity or acceleration. During earthquakes, the ground movement is very complex, producing translations in any general direction combined with rotations about arbitrary axes. Modern strong motion accelerographs are designed to record three translational components of ground acceleration, switching themselves on automatically once the earthquake ground motion reaches a certain threshold level, usually about 0.005 g.

Any strong motion instrumentation essentially requires the following components:

1. Vibrating machine

2. Vibration transducer

3. Signal conversion

4. Display / recording.

5. Data analysis.

The first complete record of strong ground motion was obtained during the 1940 El Centro earthquake in California, shown in Figure 1.

Figure 1. Example of strong motion earthquake record (N-S component of El-Centro, 1940
earthquake).

1.7 Ground Motion Characteristics

Several earthquake parameters are reported in the literature for quantitatively describing the
various characteristics of the ground motion. These cover characteristics such as amplitude of
motion, frequency content of motion, duration of motion, etc.

1.7.1 Amplitude Parameters

Peak Ground Acceleration The earthquake time history contains several engineering characteristics of ground motion, and the maximum amplitude of motion is one of the most important parameters among them. The PGA is a measure of the maximum amplitude of motion and is defined as the largest absolute value of the acceleration time history.

Peak Velocity is the largest absolute value of the velocity time history. It is more sensitive to the intermediate-frequency components of motion and characterizes the response of structures that are sensitive to this intermediate range, e.g. tall buildings, bridges, etc.

Peak Displacement reflects the amplitude of the lower-frequency components of ground motion. Accurate estimation of this parameter is difficult, as errors in signal processing and numerical integration greatly affect the estimated amplitude of the displacement time history.

1.7.2 Frequency Content Parameters

Earthquake ground motion is an amalgamation of harmonic motions with a range of frequency components and amplitudes. Some of these are discussed below.

Response Spectra A plot showing the maximum response induced by ground motion in single-degree-of-freedom oscillators of different fundamental time periods, all having the same damping, is known as a response spectrum. The maximum response can be spectral acceleration, spectral velocity or spectral displacement.

The spectral velocity and spectral acceleration are related by SA = w0·SV, where SA is the spectral acceleration, SV the spectral velocity and w0 the natural circular frequency. Similarly, it can be shown that SV = w0·SD, where SD is the spectral displacement.
Figure 2. Design response spectral shape suggested
by BIS (IS 1893-2002).
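The pseudo-spectral relations above can be checked in a few lines; the period and spectral displacement below are assumed example values:

```python
import numpy as np

# Pseudo-spectral relations SA = w0*SV and SV = w0*SD for an SDOF oscillator
T = 0.5                      # natural period (s), assumed
w0 = 2 * np.pi / T           # natural circular frequency (rad/s)
SD = 0.02                    # spectral displacement (m), assumed
SV = w0 * SD                 # pseudo-spectral velocity (m/s)
SA = w0 * SV                 # pseudo-spectral acceleration (m/s^2)
print(SV, SA)
```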

Fourier Spectra The plot of the Fourier amplitude of the input time history versus time period or frequency is known as the Fourier spectrum. Since Fourier analysis provides both amplitudes and phase angles, the Fourier spectrum can be either a Fourier amplitude spectrum or a Fourier phase spectrum. The Fourier amplitude spectrum provides information on the frequency content of the motion and helps to identify the predominant frequency of motion. As with response spectra, the Fourier spectra of two time histories can be vastly different.

Power Spectra

The frequency content of ground motion can also be represented by a power spectrum or power spectral density function. The ordinate of the power spectrum is calculated as

G(w) = Cn² / (π·Td)

where G(w) is the spectral density at natural circular frequency w, Td the duration of the time history and Cn the Fourier amplitude at natural circular frequency w.
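A minimal sketch of computing Fourier amplitudes, power-spectrum ordinates, and the predominant frequency from a synthetic record (the two-sinusoid signal is an assumption for illustration):

```python
import numpy as np

# Synthetic "acceleration" record: two sinusoids at 2 Hz and 5 Hz
dt = 0.01                                  # sample interval (s)
t = np.arange(0, 20, dt)                   # 20 s record
Td = len(t) * dt                           # duration (s)
acc = 0.3 * np.sin(2*np.pi*2.0*t) + 0.1 * np.sin(2*np.pi*5.0*t)

freq = np.fft.rfftfreq(len(acc), dt)       # frequency axis (Hz)
Cn = np.abs(np.fft.rfft(acc)) * dt         # Fourier amplitudes
G = Cn**2 / (np.pi * Td)                   # power spectral density ordinates

f_pred = freq[np.argmax(Cn)]               # predominant frequency
print(f_pred)
```

The largest Fourier amplitude identifies the predominant frequency, here the 2 Hz component because it has the larger amplitude.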

1.8 Uncertainties

Uncertainty is sometimes classified into two categories, although the validity of this categorization
is open to debate. These categories are prominently seen in medical applications.

Aleatoric uncertainty is also known as statistical uncertainty, and is representative of unknowns


that differ each time we run the same experiment. For example, arrows shot from a mechanical bow that exactly duplicates each launch (the same acceleration, altitude, direction and final velocity) will not all impact the same point on the target, due to random and complicated vibrations of the arrow shaft, knowledge of which cannot be determined sufficiently to eliminate the resulting scatter of impact points. The argument here obviously lies in the definition of "cannot": just because we cannot measure sufficiently with our currently available measurement devices does not necessarily preclude the existence of such information, which would move this uncertainty into the category below.

Epistemic uncertainty is also known as systematic uncertainty, and is due to things one could in principle know but does not in practice. This may be because a measurement is not accurate, because the model neglects certain effects, or because particular data have been deliberately hidden. An example of a source of this uncertainty is air resistance in an experiment designed to measure the acceleration of gravity near the earth's surface. The commonly used gravitational acceleration of 9.8 m/s² ignores the effects of air resistance, but the air resistance for the object could be measured and incorporated into the experiment to reduce the resulting uncertainty in the calculation of the gravitational acceleration.

CHAPTER 2

REVIEW OF LITERATURE

Baker et al. (2001) estimate fragility functions using dynamic structural analysis. The paper discusses the applicability of statistical inference concepts for fragility function estimation, describes appropriate fitting approaches for use with various structural analysis strategies, and studies how to fit fragility functions while minimizing the required number of structural analyses. There are a number of procedures for performing nonlinear dynamic structural analyses to collect the data for estimating a fragility function. One common approach is incremental dynamic analysis (IDA), in which a suite of ground motions is repeatedly scaled in order to find the IM level at which each ground motion causes collapse.

Incremental dynamic analysis

A lognormal cumulative distribution function is often used to define the fragility function:

P(C | IM = x) = Φ( ln(x/θ) / β )

where P(C | IM = x) is the probability that a ground motion with IM = x will cause the structure to collapse, Φ(·) is the standard normal cumulative distribution function (CDF), θ is the median of the fragility function (the IM level with 50% probability of collapse) and β is the standard deviation of ln IM.
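This lognormal fragility function is straightforward to evaluate; in the sketch below the median and dispersion are hypothetical example values:

```python
import numpy as np
from scipy.stats import norm

def fragility(x, theta, beta):
    """Lognormal collapse fragility P(C | IM = x)."""
    return norm.cdf(np.log(np.asarray(x, dtype=float) / theta) / beta)

# Hypothetical parameters: median collapse capacity 0.9 g, dispersion 0.4
p = fragility([0.45, 0.9, 1.8], theta=0.9, beta=0.4)
print(p)  # probability is exactly 0.5 at the median IM
```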

Fragility function parameters can be estimated from these data by taking logarithms of each ground motion's IM value associated with the onset of collapse and computing their mean and standard deviation.

An alternative is to use counted fractions of the IMi values, rather than their moments, to estimate θ and β.
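The moment-based fit described above can be sketched as follows, using hypothetical collapse IM values from an IDA:

```python
import numpy as np

# Hypothetical IDA results: the IM level at which each scaled ground
# motion first caused collapse (g)
im_collapse = np.array([0.46, 0.58, 0.63, 0.71, 0.79, 0.88, 0.94,
                        1.05, 1.18, 1.42])

ln_im = np.log(im_collapse)
theta = np.exp(ln_im.mean())      # median = exp(mean of the log IMs)
beta = ln_im.std(ddof=1)          # dispersion = sample std of the log IMs
print(theta, beta)
```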

Figure 3. a) Example incremental dynamic analyses results, used to identify IM values associated
with collapse for each ground motion. b) Observed fractions of collapse as a function of IM, and a
fragility function.

Multiple stripes analysis

This method is used for discrete IM level and different ground motions are used at each IM level.
Due to the differing ground motions used at each IM level, the analyst may not observe strictly
increasing fractions of collapse with increasing IM, even though it is expected that the true
probability of collapse is increasing with IM. The structural analysis results provide the fraction of
ground motions at each IM level that cause collapse. The appropriate fitting technique for this type
of data is to use the method of maximum likelihood.

At each intensity level IM = xj, the structural analyses produce some number of collapses out of a total number of ground motions. Assuming that the observation of collapse or no collapse from each ground motion is independent of the observations from other ground motions, the probability of observing zj collapses out of nj ground motions with IM = xj is given by the binomial distribution

P(zj collapses in nj ground motions) = (nj choose zj) · pj^zj · (1 − pj)^(nj − zj)

where pj is the probability that a ground motion with IM = xj causes collapse. When analysis data are obtained at multiple IM levels, we take the product of the binomial probabilities at each IM level to get the likelihood for the entire data set

Likelihood = Π over j = 1..m of (nj choose zj) · pj^zj · (1 − pj)^(nj − zj)

where m is the number of IM levels. We then substitute pj = Φ( ln(xj/θ) / β ), so the fragility parameters are explicit in the likelihood function; estimates of θ and β are obtained by maximizing the likelihood. It is equivalent and numerically easier to maximize the logarithm of the likelihood function.

A fragility function obtained using this approach is displayed in Figure 4.
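A minimal sketch of maximum-likelihood fragility fitting for multiple stripes data; the IM levels and collapse counts below are hypothetical:

```python
import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize

# Hypothetical multiple stripes analysis results
im = np.array([0.2, 0.4, 0.6, 0.8, 1.0])   # IM levels (g)
n = np.array([40, 40, 40, 40, 40])          # ground motions per level
z = np.array([0, 4, 13, 26, 35])            # observed collapses per level

def neg_log_likelihood(params):
    theta, beta = params
    p = norm.cdf(np.log(im / theta) / beta)   # pj at each stripe
    p = np.clip(p, 1e-12, 1 - 1e-12)          # avoid log(0)
    return -np.sum(binom.logpmf(z, n, p))

res = minimize(neg_log_likelihood, x0=[0.8, 0.4],
               bounds=[(0.01, 5.0), (0.05, 2.0)])
theta_hat, beta_hat = res.x
print(theta_hat, beta_hat)
```

Minimizing the negative log-likelihood is equivalent to maximizing the likelihood, and is the numerically convenient form mentioned in the text.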

Figure 4. a) Example MSA analysis results. Analyses causing collapse are plotted at peak story drift ratios greater than 0.08, and are offset from each other to aid in visualizing the number of collapses. b) Observed fractions of collapse as a function of IM, and a fragility function.

In summary, this paper discusses the applicability of statistical inference concepts for fragility function fitting, identifies appropriate fitting approaches for different data collection strategies, and illustrates how one might fit fragility functions using an approach that minimizes the required number of structural analyses. Incremental dynamic analysis and multiple stripes analysis approaches for data collection were discussed, and the corresponding statistically appropriate methods for fragility function fitting were described.

Kumar et al. (2014) describe the inelastic analysis of RC framed structures subjected to earthquake excitation; for such analysis, pushover (nonlinear static) analysis is at the forefront compared to time history (nonlinear dynamic) analysis. The paper proposes a probabilistic methodology to assess the seismic risk/performance of RC (reinforced concrete) buildings by considering uncertainties, based on pushover analysis, owing to the non-existence of codes of practice in the Indian context. The methodology may thus be used as a guideline for seismic risk evaluation of building structures.

The design factor of safety, FS, is the ratio of the resistance, R (i.e., capacity), the maximum load under which a system can perform its intended function, to the resultant stress, S (i.e., load or demand), placed on the system under design conditions:

FS = R (Capacity) / S (Demand)

The margin of safety, Z = R – S = Capacity – Demand. If demand exceeds capacity, Z < 0, the
system is in a failure state. The condition Z = 0 is the limiting state.

Uncertainty and Risk

For the purposes of natural hazard risk analyses, risk can be defined as R = P × I × E × V, where P is the probability of occurrence of the hazard, I its intensity, E the exposure and V the vulnerability of the exposed elements.

Uncertainty can be described as either aleatory or epistemic. Aleatory uncertainty is attributed to natural variability over space and time or to inherent randomness. Epistemic uncertainty is attributed to a lack of knowledge. Epistemic uncertainties can, in principle, be reduced by obtaining more information, although in practice it may be very difficult, expensive, or physically impossible to do so. Uncertainty in a quantity is often a mixture of aleatory and epistemic uncertainty. If the strength of materials is also a function of environmental variables such as temperature, humidity, or moisture content, these are inherently variable and the uncertainty in structural capacity is both aleatory and epistemic. Similarly, uncertainty about the loads that will be exerted on a structure can be either aleatory or epistemic.

Reliability, r, is the probability that the structure is in a survival state: r = 1 − pf. The term pf is the probability of failure, calculated from a joint probability density function for resistance and load:

pf = P(Z ≤ 0) = P(FS ≤ 1)
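A Monte Carlo estimate of pf = P(Z ≤ 0) can be sketched as follows, with assumed normal distributions for capacity and demand (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normal capacity R and demand S
n = 1_000_000
R = rng.normal(300.0, 30.0, n)   # resistance (capacity)
S = rng.normal(200.0, 40.0, n)   # load (demand)

Z = R - S                        # margin of safety
pf = np.mean(Z <= 0)             # probability of failure, P(Z <= 0)
r = 1.0 - pf                     # reliability
print(pf, r)
```

For independent normal R and S, Z is normal with mean 100 and standard deviation 50 here, so pf should be close to the exact value Φ(−2) ≈ 0.0228.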

The methodology described in this paper is as follows.

Step 1: Analytical Building Model

In the analytical model used in this paper, the nonlinear behavior is represented using the concentrated plasticity concept with rotational springs, or the distributed plasticity concept, where the plastic behavior occurs over a finite length.

Figure 5. Overall Geometry of the Structure

Step 2: Pushover Analysis

Conventional pushover analysis is carried out to determine the ground motion intensity to which the building must be subjected for it to displace to a specified inter-story drift ratio, using the latest version of SAP/E-TABS software.

Figure 6. General procedure of the probabilistic CSM

Step 3: Define Damage State Indicator Levels (Failure Criteria and Performance Limit States)

The top storey displacement is often used by many researchers as a failure criterion because of the
simplicity and convenience associated with its estimation.

Table 1. Damage State Indicator Levels

Slight Damage: Hinge yielding at one floor
Moderate Damage: Yielding of beams or joints at more than one floor
Extensive Damage: Hinge rotation exceeds plastic rotation capacity
Collapse: Structural instability

Realistic damage limit states are required in the development of reliable fragility curves, which are
employed in the seismic risk assessment packages for mitigation purposes.

Step 4: Incorporate the Uncertainty

Conduct a vulnerability analysis of the reference RC building located in Zone IV/Zone V of IS 1893-2002, with uncertainty incorporated.

Step 5: Building Fragility Curves

Develop analytical fragility estimates to quantify the seismic vulnerability of the RC frame building.

The methodology proposed and outlined in this article is for the probabilistic seismic risk evaluation of building structures; it may be used as a guideline for seismic vulnerability assessment based on nonlinear static (pushover) analysis using any suitable software.

Zentner (2016) describes different fragility analysis methods and discusses their advantages and disadvantages: (i) the safety factor method, in which the fragility curve is estimated based on safety margins with respect to an existing deterministic design; the numerical simulation method, in which the parameters of the fragility curve are obtained by (ii) regression analysis or (iii) maximum likelihood estimation from a set of nonlinear time history analyses at different seismic levels; and (iv) the incremental dynamic analysis method, where a set of accelerograms is scaled until failure. These four fragility analysis methods are applied to determine fragility curves for the 3-storey reinforced concrete shear wall building of the SMART2013 benchmark project.

Safety factor method

The starting point for the safety factor method is an existing deterministic seismic design of the
structure. On this basis, seismic margins, also called the safety factors, are evaluated in order to
estimate a realistic median capacity of the structure.

Epistemic uncertainty and aleatory (random) variability are distinguished and characterized by two different logarithmic standard deviations (log std), denoted βU and βR. This yields the following expression for the seismic capacity:

A = Am · εR · εU

where Am is the median capacity and εU and εR are two lognormal random variables with median equal to one and log std βU (epistemic) and βR (aleatory), respectively.

The median capacity is obtained as the median value of the safety factor F multiplied by the design basis earthquake. F is the product of different margin factors:

F = FS · Fµ · FRS

where FS is the strength factor, Fµ the energy dissipation (ductility) factor and FRS the structural response factor.

The latter can be subdivided into the following components:

FRS = FSS · Fd · FM · FMC · FEC · FSSI · FGMI

where FSS is the spectral shape factor, Fd the damping factor, FM the modelling factor, FMC the possible margin related to the structural mode combination rules, FEC the factor related to the combination of horizontal earthquake components, FSSI the factor for soil-structure interaction and FGMI the factor for spatial incoherency of ground motion.

The fragility curve is then expressed as a function of the confidence level Q, which represents the epistemic uncertainties:

Pf(α; Q) = Φ( (ln(α/Am) + βU·Φ⁻¹(Q)) / βR )

This expression allows the evaluation of the median fragility curve for Q = 0.5 and of its confidence intervals by choosing adequate values of Q (e.g. Q = 0.95 for 95% confidence intervals). The so-called composite fragility curve is the mean curve; it is obtained by considering the composite log std βC, given by the square root of the sum of the squares:

βC = √(βU² + βR²)

This also allows writing the HCLPF (High Confidence of Low Probability of Failure) capacity, defined as the capacity for which the failure probability is only 5% with 95% confidence, as:

A_HCLPF = Am · exp(−1.645 · (βR + βU))
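Using the log standard deviations reported for the overall factor F in Table 2 (βU = 0.28, βR = 0.45), and assuming the median capacity equals the median safety factor times the 0.2 g design acceleration, the composite dispersion and HCLPF capacity can be sketched as:

```python
import numpy as np

# Safety-factor results for the benchmark: median factor F = 6.60,
# design PGA 0.2 g (median capacity assumed = F_median * design PGA)
Am = 6.60 * 0.2                # median capacity (g)
beta_U, beta_R = 0.28, 0.45    # epistemic and aleatory log std

beta_C = np.hypot(beta_U, beta_R)                # composite log std
hclpf = Am * np.exp(-1.645 * (beta_R + beta_U))  # 5% failure, 95% confidence
print(beta_C, hclpf)
```

The computed value is consistent with the 0.40 g HCLPF reported for the benchmark structure.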

Application of this method

Structure and numerical model

The fragility analysis methods were evaluated by performing analysis for a three-storey reinforced
concrete structure, which was analyzed within the SMART2013 international benchmark project
(Richard et al., 2016). For the purpose of the benchmark, a reduced scale model representative of
a nuclear building was designed, built and tested on the AZALEE shaking table at CEA Saclay.
The structure is a 1:4 scaled mock-up representing a simplified half part of a nuclear electrical
building. The dimensions were 3.10 × 2.55 m in plan and the height was 3.65 m.

The lateral load resisting system is provided by shear walls with minor and major openings and a
thickness of 0.1 m. The main eigenmodes and frequencies of the linear numerical model with
equivalent foundation impedances are rocking in x and y-directions at 5.74 Hz and 6.39 Hz,
translation in z-direction at 14.82 Hz and torsion at 17.61 Hz. The design spectrum has a zero
period acceleration of 0.2 g and corresponds to a magnitude 5.5 and distance 10 km earthquake.

Figure 7. Mock-up of SMART2013 benchmark project (left) and the numerical model (right).

Damping is defined with respect to the elastic stiffness matrix, and determined so as to obtain a 3.5% damping ratio at 5 Hz and 21 Hz. Uncertainty is accounted for by modelling the damping and stiffness of the supporting springs and the structural damping ratio as lognormal random variables with median equal to the best-estimate value. The coefficients of variation are 1% and 2% for the spring stiffness and damping, respectively, and 20% for the structural damping ratio.

Latin Hypercube sampling is used to generate 50 sets of model parameters. According to the benchmark specification, the correlation coefficient between the spring stiffness and damping was set to 0.80. The ISD (interstorey drift) ratio threshold for the extended damage level proposed for the SMART2013 benchmark was Ds = h/100, where h = 1.2 m is the interstorey height. This corresponds to the life safety level (drift ratio 1%) specified in FEMA 356. In order to consider structural failure and compare the results to the Safety Factor method, a second ISD threshold, Ds = 2h/100, is considered here. The second threshold value corresponds to collapse of concrete walls according to FEMA 356 (drift ratio 2%).
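A sketch of Latin Hypercube sampling with a target correlation of 0.80 between two lognormal parameters; the medians and coefficient of variation below are assumed, and imposing correlation through a Gaussian copula slightly perturbs the strict LHS stratification:

```python
import numpy as np
from scipy.stats import qmc, norm

# Stratified uniform sample for two model parameters
n = 50
u = qmc.LatinHypercube(d=2, seed=1).random(n)

# impose correlation 0.80 via a Gaussian copula (Cholesky of target R)
z = norm.ppf(u)
L = np.linalg.cholesky(np.array([[1.0, 0.8], [0.8, 1.0]]))
z_corr = z @ L.T

# map to lognormal marginals with median 1.0 and an assumed 10% COV
beta = np.sqrt(np.log(1.0 + 0.10**2))
stiffness = np.exp(beta * z_corr[:, 0])   # spring stiffness factor
damping = np.exp(beta * z_corr[:, 1])     # spring damping factor

rho = np.corrcoef(stiffness, damping)[0, 1]
print(rho)
```

With only 50 samples, the realized correlation scatters around the 0.80 target.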

Seismic input and intensity measures

The ground motion time histories are artificial accelerograms generated with a stochastic ground motion simulation model based on a spectrum-compatible power spectral density. The target spectrum is a design spectrum associated with a magnitude M = 6.5 and distance D = 9 km event for a rock site.

Figure 8. Response spectra of simulated accelerograms (blue), their median and ±1σ values
(magenta) compared to the target values (red).

This artificial ground motion model allows simulating time histories with realistic spectral shapes that are in agreement with major ground motion IMs. According to the benchmark specifications, and in agreement with the experimental tests, only horizontal seismic load is considered. A set of 50 pairs of synthetic horizontal ground motions (x and y directions), simulated to match the scenario target spectrum in median and ±1σ values, is the seismic input considered for the numerical fragility analysis.

Figure 9. Examples of time histories (acce 1 – acce 3 from top to bottom, left) and corresponding
response spectra (Pseudo Spectral Acceleration – PSA, right).

Table 2. Safety factors and resulting fragility curve parameters.

Factor   Mean value   βU     βR
F        6.60         0.28   0.45
FS       2.74         0.04   0.22
Fµ       2.41         0.04   0.14
FRS      1.00         0.27   0.36
FSS      1.00         0.20   0.24
Fd       1.00         0.00   0.00
FM       1.00         0.00   0.25
FMC      1.00         0.15   0.00
FEC      1.00         0.11   0.11
FSSI     1.00         0.00   0.00

The HCLPF capacity is evaluated as 0.40 g. The resulting composite and median fragility curves
as well as 5% and 95% confidence intervals are shown in Fig. 10.

Figure 10. Family of fragility curves based on the Safety Factor method

Kwon et al. (2005) focus on establishing the relative effect of strong-motion variability and random structural parameters on the ensuing vulnerability curves. Moreover, the effect of the selection of statistical models used to present simulation results is studied. A three-story ordinary moment resisting reinforced concrete frame, previously shake-table tested, is used as a basis for the fragility analysis. The analytical environment and the structural model are verified through comparison with shaking-table test results. The selection of ground motion sets, definition of limit states, statistical manipulation of simulation results, and the effect of material variability are investigated.

Four aspects of the derivation process mainly affect vulnerability curves, as shown in Figure 11. These are the structure, hazard definition, simulation method, and vulnerability analysis. Each component
can be divided into a number of sub-tasks. By definition, vulnerability analysis is probabilistic
since each of the constituent components is uncertain. Some of the uncertainties are inherently

random while others are consequences of lack of knowledge. In this study, only uncertainties in material properties and input motion are considered.

Figure 11. Flow chart for the derivation of analytical vulnerability curves

Uncertainties in capacity and demand

In the derivation of vulnerability functions, a probabilistic approach is adopted owing to uncertainties in the hazard (demand) as well as in the structural supply (capacity). Some of those uncertainties stem from factors that are inherently random (referred to as aleatoric uncertainty), others from lack of knowledge (referred to as epistemic uncertainty).

Material uncertainty

Concrete strength

In this study, it is assumed that there is no variability of concrete strength within the structure, since the structure is a low-rise building of limited volume that would have been constructed in a relatively short period; the concrete strength is therefore treated as a single random variable with a coefficient of variation of 18.6%. The specified concrete strength (design strength) of the considered structure was 24 MPa. The in-place concrete strength is assumed to be 1.40 times the specified strength (33.6 MPa). A normal distribution is adopted for the concrete strength.

Steel strength

The mean value and coefficient of variation of the yield strength were 337 MPa (48.8 ksi) and 10.7%, respectively. The probability distribution of the modulus of elasticity of Grade 40 reinforcing steel followed a normal distribution with a mean value of 201,327 MPa (29,200 ksi) and a coefficient of variation of 3.3%.

Input motion uncertainty

Selection of Ground motion

The first three sets of ground motions are based on the ratio of peak ground acceleration to peak
ground velocity (a/v).

Ground motions were classified in the following ranges:

Low: a/v < 0.8g/m s−1

Intermediate: 0.8g/m s−1 ≤ a/v ≤ 1.2g/m s−1

High: 1.2g/m s−1 < a/v.

The average response spectra of the selected ground motion sets show distinctive differences among the sets. The remaining six sets used in this study are artificial ground motions.

Figure 12. Average response spectrum of selected ground motion sets.

Random variables sampling

For the low, intermediate, and high a/v ground motion sets, ten ultimate concrete strengths, fc, and ten steel yield strengths, Fy, were generated, and the full combination of material strengths was used, resulting in a total of 100 frames. For ground motion sets U-1, U-2, and U-3, based on the Uplands profile, 100 concrete and steel strengths were generated. From the analysis results of these frames, the effect of sample size is investigated.

Limit state definition

Three limit states are defined here: 'serviceability', 'damage control', and 'collapse prevention'. In this study, the local damage of individual structural elements, such as beams, columns, or beam-column joints, is not accounted for; only interstory drift is used as a global measure of damage. The interstory drift thresholds are 0.57%, 1.2% and 2.3% for the three limit states, as shown in Figure 13. In the figure, the curves labelled 'column' represent the bottom element of the 1st story columns. It is assumed that these limit states are also applicable to the 2nd and 3rd stories.

Figure 13. Definition of limit states: (a) serviceability state, (b) damage control state, (c) collapse
state.

Simulation and vulnerability curve derivation

The total simulation of the frame using the 'low', 'intermediate', and 'high' a/v ratio ground motion sets required 456 h on a Pentium IV 2.65 GHz PC for a total of 23,000 response history analyses. The structure is considered to be in the collapse state if the maximum interstory drift exceeds 2.3%. The probability that the maximum interstory drift of a frame exceeds a certain limit state is calculated as below:

P(ISD > ISDLimit) = P(ISD > ISDLimit | E1) · P(E1) + P(ISD > ISDLimit | E2) · P(E2)

= P(ISD > ISDLimit | E1) · P(E1) +1.0 · P(E2).

Here E1 denotes the analyses with ISD < 2.3%, and E2 denotes the analyses considered to have collapsed (ISD > 2.3%), for which the exceedance probability is taken as 1.0. This assumption provides conservative results. Figure 14 shows a sample vulnerability curve for the 0.57% ISD limit state using the intermediate a/v ratio ground motion sets.

Figure 14. Derived vulnerability curves using various methods – normal a/v ratio, limit state =
ISD 0.57%.

CHAPTER 3

OBJECTIVES OF THE STUDY

Vulnerability curves play a critical role in regional seismic risk and loss estimation as they give
the probability of attaining a certain damage state when a structure is subjected to a specified
demand. Such loss estimations are essential for the important purposes of disaster planning and
formulating risk reduction policies.

The objectives of the present study are briefly outlined as follows:

 The main objective of this study is to present vulnerability curves of a reinforced concrete
structure subjected to various ground motion sets and to investigate the effects of material
uncertainties and of the selected ground motion sets on the obtained vulnerability curves.
 To study efficient analytical fragility function fitting using dynamic structural analysis.
 To evaluate the most suitable method for determining vulnerability curves for different civil
engineering structures.
 To study the different characteristics of earthquake strong motion.
 To investigate and identify trends, and to derive conclusions on the relative sensitivity to input
motion and to the randomness of material properties.
 To improve reliability-based design guidelines.
 To study global seismic reliability analysis of building structures based on system-level
limit states.
 To study probabilistic seismic risk evaluation of building structures based on pushover
analysis.
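The second objective refers to fitting a lognormal fragility function to dynamic analysis results by maximum likelihood, i.e. estimating the median θ and dispersion β that best explain the observed exceedance counts. The sketch below is an illustrative implementation under that assumption; the data layout (exceedance counts per intensity level) and all names are hypothetical:

```python
import numpy as np
from scipy import stats, optimize

def fit_fragility_mle(im, n_exceed, n_total):
    """Fit P(exceed | IM = x) = Phi(ln(x / theta) / beta) by
    maximizing the binomial likelihood of the exceedance counts."""
    im, n_exceed, n_total = map(np.asarray, (im, n_exceed, n_total))

    def neg_log_like(params):
        theta, beta = params
        if theta <= 0.0 or beta <= 0.0:
            return np.inf
        p = stats.norm.cdf(np.log(im / theta) / beta)
        p = np.clip(p, 1e-12, 1.0 - 1e-12)  # guard against log(0)
        return -np.sum(n_exceed * np.log(p)
                       + (n_total - n_exceed) * np.log(1.0 - p))

    result = optimize.minimize(neg_log_like, x0=[np.median(im), 0.4],
                               method="Nelder-Mead")
    return result.x  # (theta, beta)
```

Here θ is the intensity at which the exceedance probability is 50%, and β controls how gradually the fragility curve rises.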

CHAPTER 4

Scope of Present Study

The present study is motivated by the need to develop reliability-based design guidelines for
structures supported on pile foundations, incorporating the effect of soil-structure interaction
(SSI). This will help ensure a sound design for piled-raft-supported structures, which may
perform better during moderate to strong earthquakes.

 The scope of this study is to present vulnerability curves of a reinforced concrete
structure subjected to various ground motion sets and to investigate the effects of
material uncertainties and selected ground motion sets on the obtained vulnerability
curves.
 The present study can help in developing simpler and more reliable methods for
generating vulnerability curves for civil engineering structures.
 It can help in identifying the different uncertainties that come into consideration
in reliability-based design.
 It can contribute to improved guidelines for the seismic design and reliability-based
design of structures.
 It may help improve seismic risk assessment studies for structures; several other
aspects of vulnerability curve derivation are also investigated, such as the selection
of ground motion duration, definition of limit states, selection of damping
parameters, and statistical treatment.

CHAPTER 5

Summary and Future Scope of Study

In this study, vulnerability curves for different structures are derived using different sets of
ground motions. The effects of ground motion input and material variability are the main focus of
the study, alongside other supporting aspects of vulnerability curve derivation such as the
selection of a representative structure, verification of the analytical model and analysis
environment, effect of explicit damping, scaling of ground motion sets, significant duration of
ground motion, and limit state definition. Some of the findings, as well as comments on the
methods adopted in this study, are summarized below:

 Material uncertainty affects the structural response under low ground motion
intensities, since the concrete elastic modulus is related to the ultimate strength.
 At high ground motion levels, material properties contribute to the variability in structural
response, but the resulting variability is much smaller than that due to ground motion
variability.
 Input motion characteristics have the most significant effect on vulnerability curves;
therefore, meticulous consideration is required when ground motions are selected.
 A general overview of previous work on different methods of generating vulnerability
curves has been presented.

The procedures and discussions in this study, however, provide insight into the vulnerability
derivation of low-ductility structures.
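The first finding above — that stiffness uncertainty is inherited from strength uncertainty — can be illustrated with a small Monte Carlo sketch. The 15% coefficient of variation and the ACI-style relation E_c = 4700·√f'c (MPa) are illustrative assumptions, not values taken from this study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample concrete compressive strength f'c (MPa): lognormal, ~15% COV
f_c = rng.lognormal(mean=np.log(30.0), sigma=0.15, size=100_000)

# Elastic modulus tied to strength via an ACI-style relation (MPa)
E_c = 4700.0 * np.sqrt(f_c)

cov_fc = f_c.std() / f_c.mean()
cov_Ec = E_c.std() / E_c.mean()
# Because E_c varies as sqrt(f'c), its COV is roughly half that of f'c:
# the elastic stiffness, which governs response at low shaking levels,
# inherits a damped version of the strength uncertainty.
```

This is why material randomness shows up mainly in the low-intensity, elastic range of the response, as noted in the findings.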

Suggestions for Future Research

 A study can be performed on a group of piles to examine the structural responses, and the
variation in responses relative to a single pile can be studied in detail.
 Further, fragility curves can be developed by applying earthquake motions to pile groups
embedded in sand as well as clay, in order to estimate the probability of failure.
 Work on transformation uncertainty parameters can also be performed.
 A study can be made considering the uncertainty in loading parameters.

REFERENCES
1. Baker, J. W. (2015). "Efficient analytical fragility function fitting using dynamic
structural analysis." Earthquake Spectra, 31(1), 579–599.
2. Aslani, H., and Miranda, E. (2005). "Fragility assessment of slab-column connections in
existing non-ductile reinforced concrete buildings." Journal of Earthquake Engineering,
9(6), 777–804.
3. Ravi Kumar, C. M., Babu Narayan, K. S., and Venkat Reddy, D. (2014). "Methodology for
probabilistic seismic risk evaluation of building structure based on pushover analysis."
Open Journal of Architectural Design, 2(2), 13–20.
4. Bolotin, V. V. (1993). "Seismic risk assessment for structures with the Monte Carlo
simulation." Probabilistic Engineering Mechanics, 8(3), 169–177.
5. Calvi, G. M., Pinho, R., Magenes, G., Bommer, J. J., Restrepo-Vélez, L. F., and Crowley,
H. (2006). "Development of seismic vulnerability assessment methodologies over the past
30 years." ISET Journal of Earthquake Technology, 43(3), 75–104.
6. Jalayer, F., and Beck, J. L. (2008). "Effects of two alternative representations of
ground-motion uncertainty on probabilistic seismic demand assessment of structures."
Earthquake Engineering & Structural Dynamics, 37(1), 61–79.
7. Kennedy, R. P., and Ravindra, M. K. (1984). "Seismic fragilities for nuclear power plant
risk studies." Nuclear Engineering and Design, 79(1), 47–68.
8. Kwon, O.-S., and Elnashai, A. (2006). "The effect of material and ground motion
uncertainty on the seismic vulnerability curves of RC structure." Engineering Structures,
28(2), 289–303.
9. Zentner, I., Gündel, M., and Bonfils, N. (2017). "Fragility analysis methods: Review of
existing approaches and application." Nuclear Engineering and Design, 323, 245–258.
10. Kennedy, R. P., Cornell, C. A., Campbell, R. D., Kaplan, S., and Perla, H. F. (1980).
"Probabilistic seismic safety study of an existing nuclear power plant." Nuclear
Engineering and Design, 59(2), 315–338.
