
LECTURE 4-5-6

CIVE 632 – Reliability


Reliability-Based Design of Civil Systems

Random Variables

Spring 2008 Dr. Shadi Najjar AUB

Definition of a Random Variable


• Random Variable: a device to formalize and facilitate the description
  of an event and the computation of its probability

[Figure: sample space S with event E1 mapped onto the x-axis between a and b]

E1 = (a < X ≤ b)    Ex: Settlement
X = random variable (uncertain value)
x = realization of variable (known value)

Discrete Random Variable
Probability Mass Function

[Figure: bar chart of pX(x) versus remediation cost of $1–3 million]

pX(x) = P(X = x) = probability mass function

Discrete Random Variable
Cumulative Distribution Function

[Figure: step plot of FX(x) versus remediation cost of $1–4 million]

FX(x) = P(X ≤ x) = cumulative distribution function

FX(a) = Σ pX(xi) over all xi ≤ a    and    P(a < X ≤ b) = FX(b) − FX(a)
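The PMF-to-CDF relation above can be sketched in a few lines of Python. The PMF values here are assumed for illustration (the slide's bar heights are not given numerically), so treat them as hypothetical:

```python
# Hypothetical PMF for the remediation-cost example ($ million); the
# probabilities are assumed, not read from the slide's bar chart.
pmf = {1.0: 0.10, 1.5: 0.20, 2.0: 0.30, 2.5: 0.25, 3.0: 0.15}

def cdf(a, pmf):
    """F_X(a) = sum of p_X(x_i) over all x_i <= a."""
    return sum(p for x, p in pmf.items() if x <= a)

def prob_interval(a, b, pmf):
    """P(a < X <= b) = F_X(b) - F_X(a)."""
    return cdf(b, pmf) - cdf(a, pmf)
```

With these values, cdf(2.0, pmf) gives 0.60 and prob_interval(1.5, 2.5, pmf) gives 0.55, exactly the "sum the mass up to a" rule stated above.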

Continuous Random Variables
Probability Density Function

[Figure: fX(x) curve over remediation cost of $1–3 million, with the area
fX(x)dx shaded over a small interval of width dx]

fX(x) = probability density function
P(x < X ≤ x + dx) = fX(x) dx

Continuous Random Variable
Cumulative Distribution Function

[Figure: FX(x) curve rising from 0 to 1 over remediation cost of $1–4 million]

FX(x) = P(X ≤ x) = cumulative distribution function

FX(a) = ∫_{−∞}^{a} fX(x) dx    FX(−∞) = 0.0    FX(∞) = ∫_{−∞}^{∞} fX(x) dx = 1.0

P(a < X ≤ b) = FX(b) − FX(a) = ∫_{a}^{b} fX(x) dx

Random Variable Descriptors

Central Value

MEAN:    μ = E(X) = ∫_{−∞}^{∞} x fX(x) dx

MEDIAN:  FX(xm) = 0.5    or    xm = FX⁻¹(0.5)

MODE:    x̃ = most likely value [where fX(x) is maximum]

Random Variable Descriptors

Example

fX(x) = x/2,  0 ≤ x ≤ 2

E(X) = ∫_{0}^{2} x (x/2) dx = [x³/6] from 0 to 2 = 4/3    (Mean)

FX(x) = ∫_{0}^{x} (x/2) dx = x²/4

FX(xm) = xm²/4 = 0.5  ⇒  xm = √2 = 1.41    (Median)

x̃ = 2    (Mode)

[Figure: fX(x) rising linearly from 0 at x = 0 to 1 at x = 2, with the mode
at 2, the median at 1.41, and the mean at 4/3]

Random Variable Descriptors

Dispersion

VARIANCE:  Var(X) = E[(X − μ)²] = ∫_{−∞}^{∞} (x − μ)² fX(x) dx

STANDARD DEVIATION:  σ = √Var(X)

COEFFICIENT OF VARIATION:  δ = σ/μ

Random Variable Descriptors

Example

fX(x) = x/2,  0 ≤ x ≤ 2,  μ = 4/3

Var(X) = ∫_{0}^{2} (x − μ)² (x/2) dx = [x⁴/8 − 2μx³/6 + μ²x²/4] from 0 to 2 = 0.222

σ = √0.222 = 0.471

δ = 0.471/(4/3) = 0.354

[Figure: fX(x) = x/2 plotted for 0 ≤ x ≤ 2]
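The descriptors of fX(x) = x/2 can be verified numerically. This is a minimal sketch using midpoint-rule integration with nothing beyond the standard library:

```python
# Numerical check of the mean, median, mode, and dispersion of
# f_X(x) = x/2 on [0, 2], using composite midpoint-rule integration.
n = 100_000
dx = 2.0 / n
xs = [(i + 0.5) * dx for i in range(n)]
f = lambda x: x / 2.0

mean = sum(x * f(x) * dx for x in xs)      # E(X)   = 4/3
ex2 = sum(x * x * f(x) * dx for x in xs)   # E(X^2) = 2
var = ex2 - mean ** 2                      # 2/9 ~ 0.222
sigma = var ** 0.5                         # ~0.471
delta = sigma / mean                       # ~0.354

median = (4 * 0.5) ** 0.5                  # solve x_m^2/4 = 0.5 -> sqrt(2)
mode = 2.0                                 # f_X is maximized at x = 2
```

The numerical integrals reproduce the closed-form answers: mean 4/3, variance 2/9 = 0.222, and δ = 0.354.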

Random Variable Descriptors

nth Moment of a Random Variable

E(Xⁿ) = ∫_{−∞}^{∞} xⁿ fX(x) dx

First Moment = E(X)
Second Moment = E(X²)

nth Central Moment of a Random Variable

Second Central Moment = Variance = E[(X − μ)²] = E(X²) − [E(X)]²

Third Central Moment = E[(X − μ)³]  (a measure of skewness)

Mathematical Expectation

E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx

Normal Distribution (Gaussian)

X = N(μ, σ)

fX(x) = (1/(σ√(2π))) e^{−½((x−μ)/σ)²},  −∞ < x < ∞

[Figure: normal PDFs N(1,1), N(3,1), and N(1,2); N(1,1) and N(3,1) differ
only in location, while N(1,2) is wider and flatter]

Normal Distribution (Gaussian)

X = N(μ, σ)

fX(x) = (1/(σ√(2π))) e^{−½((x−μ)/σ)²},  −∞ < x < ∞

FX(x) = Φ((x − μ)/σ)    evaluated from tables or spreadsheets

Φ(0) = 0.5
Φ(1) = 0.841
Φ(2) = 0.977
Φ(3) = 0.9987
Φ(−s) = 1 − Φ(s)  by symmetry
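Instead of tables, Φ can be computed from the error function, which is related to the standard normal CDF by Φ(z) = (1 + erf(z/√2))/2. A short sketch (the `Phi` helper is ours, not from the slides):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Reproduces the tabulated values above:
# Phi(0) = 0.5, Phi(1) ~ 0.841, Phi(2) ~ 0.977, Phi(3) ~ 0.9987,
# and the symmetry relation Phi(-s) = 1 - Phi(s).
```

Spreadsheet users get the same numbers from NORM.S.DIST; the erf form is handy when no statistics library is available.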

Normal Distribution (Gaussian)

P(μ − σ < X ≤ μ + σ) = Φ(1) − Φ(−1) = 0.841 − (1 − 0.841) = 0.682
P(μ − 2σ < X ≤ μ + 2σ) = Φ(2) − Φ(−2) = 0.977 − (1 − 0.977) = 0.954
P(μ − 3σ < X ≤ μ + 3σ) = Φ(3) − Φ(−3) = 0.9987 − (1 − 0.9987) = 0.997

[Figure: standard normal N(0,1) PDF with the ±1σ band (area 0.68) and the
±2σ band (area 0.96) shaded]

Normal Distribution (Gaussian)

X = N(μ, σ)

E(X) = μ,  Var(X) = σ²

Special Characteristic of a Normal Distribution

If you add up a number of random variables, the sum of the variables will
follow a Normal Distribution provided that the number of variables is large.

Central Limit Theorem

Central Limit Theorem

[Figure: Flow in River 1 and Flow in River 2 each take the values 1 and 2
with probability 0.5; their sum takes the values 2, 3, 4 with probabilities
0.25, 0.5, 0.25, already closer to a bell shape]
Normal Distribution (Example)

L = Load on Column: N(200, 30) kips

P(230 < L ≤ 260) = Φ((260 − 200)/30) − Φ((230 − 200)/30)
                 = Φ(2) − Φ(1) = 0.977 − 0.841
                 = 0.136

[Figure: N(200,30) PDF over loads of 100–300 kips with the area between
230 and 260 kips shaded]

Normal Distribution (Example)

What design resistance, R, gives a 99% reliability?

P(L < R) = 0.99

[Figure: N(200,30) PDF with 99% of the area to the left of the unknown R]

Normal Distribution (Example)

What design resistance, R, gives a 99% reliability?

P(−∞ < L ≤ R) = 0.99

Φ((R − 200)/30) − Φ(−∞) = Φ((R − 200)/30) − 0 = 0.99

(R − 200)/30 = Φ⁻¹(0.99) = 2.33

R = 2.33(30) + 200 = 270 kips

[Figure: N(200,30) PDF over loads of 100–300 kips with 99% of the area to
the left of R = 270 kips]
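The inverse CDF Φ⁻¹ can be computed without tables by bisection, since Φ is monotone increasing. A sketch for the column design problem (the `Phi`/`Phi_inv` helpers are ours, built from `math.erf`):

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0, tol=1e-10):
    """Invert Phi by bisection; Phi is strictly increasing."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu, sigma = 200.0, 30.0
R = mu + sigma * Phi_inv(0.99)   # design resistance for 99% reliability
```

Phi_inv(0.99) comes out near 2.326 (the slide rounds to 2.33), giving R of roughly 270 kips.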

Lognormal Distribution

X = LN(λ, ζ)

fX(x) = (1/(xζ√(2π))) e^{−½((ln(x) − λ)/ζ)²},  0 ≤ x < ∞

[Figure: a lognormal PDF over x = 0–10]

Lognormal Distribution

X = LN(λ, ζ)

fX(x) = (1/(xζ√(2π))) e^{−½((ln(x) − λ)/ζ)²}

FX(x) = Φ((ln(x) − λ)/ζ)

[Figure: lognormal PDFs LN(1,0.5), LN(2,0.5), and LN(1,1) over x = 0–20;
plotted against y = ln(x), they become the normal PDFs N(1,0.5), N(2,0.5),
and N(1,1)]

Lognormal Distribution

X = LN(λ, ζ)

E(X) = e^{λ + ζ²/2},  Var(X) = E(X)² (e^{ζ²} − 1)

xm = e^λ  (median)

Special Characteristic of a Lognormal Distribution

If you multiply a number of random variables, the product of the variables
will follow a Lognormal Distribution provided that the number of variables
is large.

Central Limit Theorem

Lognormal Distribution
Relationship to Normal Distribution

If X = LN(λ, ζ), then Y = N(λ, ζ) where Y = ln(X)

λ = E[ln(X)] = ln(μX) − ζ²/2

ζ² = Var[ln(X)] = ln(1 + δ²)

Note: ζ is approximately equal to δ for δ < 0.3

Lognormal Distribution (Example)

L = Load on Column: LN(μ = 200 kips, σ = 30 kips)

[Figure: the N(μ = 200, σ = 30) PDF over loads of 100–300 kips, shown for
comparison]
Lognormal Distribution (Example)

L = Load on Column: LN(μ = 200 kips, σ = 30 kips)

[Figure: the LN(μ = 200, σ = 30) PDF overlaid on the N(μ = 200, σ = 30)
PDF; the lognormal is slightly right-skewed]

Lognormal Distribution (Example)

ζ = √(ln(1 + 0.15²)) = 0.149    λ = ln(200) − 0.5(0.149)² = 5.29

[Figure: the LN(μ = 200, σ = 30) PDF, equivalently LN(λ = 5.29, ζ = 0.149),
over loads of 100–300 kips]

Lognormal Distribution (Example)

L = Load on Column: LN(μ = 200 kips, σ = 30 kips)

P(230 < L ≤ 260) = Φ((ln(260) − 5.29)/0.149) − Φ((ln(230) − 5.29)/0.149)
                 = Φ(1.82) − Φ(0.994) = 0.966 − 0.839
                 = 0.127  (compare with 0.136 for the normal)

[Figure: LN(5.29, 0.149) PDF over loads of 100–300 kips]
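The lognormal interval probability follows directly from FX(x) = Φ((ln x − λ)/ζ). A sketch with an erf-based Φ (our helper, not from the slides):

```python
from math import erf, log, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def lognormal_cdf(x, lam, zeta):
    """F_X(x) = Phi((ln x - lambda) / zeta)."""
    return Phi((log(x) - lam) / zeta)

lam, zeta = 5.29, 0.149
p = lognormal_cdf(260.0, lam, zeta) - lognormal_cdf(230.0, lam, zeta)
```

Unrounded, p comes out near 0.126; the slide's 0.127 reflects rounding the Φ values to three decimals. Either way it is smaller than the 0.136 obtained with the normal model.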

Lognormal Distribution (Example)

What design resistance, R, gives a 99% reliability?

P(L < R) = 0.99

[Figure: LN(5.29, 0.149) PDF with 99% of the area to the left of the
unknown R]

Lognormal Distribution (Example)

What design resistance, R, gives a 99% reliability?

P(−∞ < L ≤ R) = 0.99

Φ((ln(R) − 5.29)/0.149) − Φ(−∞) = Φ((ln(R) − 5.29)/0.149) − 0 = 0.99

(ln(R) − 5.29)/0.149 = Φ⁻¹(0.99) = 2.33

R = e^{2.33(0.149) + 5.29} = 281 kips

[Figure: LN(5.29, 0.149) PDF over loads of 100–300 kips with 99% of the
area to the left of R = 281 kips]

Bernoulli Sequence

[Figure: a sequence of n trials, numbered 1 to n, each ending in success
(S) or failure (F)]

Assumptions
• Two possibilities in each trial
• Probability of success in each trial is a constant
• Occurrences between trials are statistically independent

Bernoulli Sequence (Examples)

Earthquakes
  Trials (Years)
  Success (No earthquake)
  Failure (Earthquake)

Pile Driving
  Trials (Meters)
  Success (No Boulders)
  Failure (Boulders)

Bernoulli Sequence

No. of Occurrences (X) in n Trials:

Binomial Probability Distribution

P(X = x | n, p) = [n!/(x!(n − x)!)] p^x (1 − p)^(n−x),  x = 0, 1, ..., n

E(X) = np    Var(X) = np(1 − p)

where p = probability of occurrence in a given trial

Binomial Distribution (Example)

GIVEN: p = P(annual max. wave height > 30 m) = 0.01

1. P(1 occ. in 20 years) = P(X = 1 | n = 20, p = 0.01)

P(X = 1 | n = 20, p = 0.01) = [20!/(1!(20 − 1)!)] (0.01)¹ (1 − 0.01)^(20−1) = 0.17

2. P(at least 1 occ. in 20 years) = 1 − P(X = 0 | 20, 0.01)

P(X = 0 | 20, 0.01) = [20!/(0!(20 − 0)!)] (0.01)⁰ (1 − 0.01)^(20−0) = 0.82

P(at least 1 occ. in 20 years) = 1 − 0.82 = 0.18
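The binomial calculation above is easy to reproduce with `math.comb`:

```python
from math import comb

def binom_pmf(x, n, p):
    """P(X = x | n, p) = C(n, x) * p**x * (1 - p)**(n - x)."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

p_one = binom_pmf(1, 20, 0.01)     # one exceedance in 20 years
p_none = binom_pmf(0, 20, 0.01)    # no exceedance in 20 years
p_at_least_one = 1 - p_none
```

The unrounded values are 0.165, 0.818, and 0.182, which the slide rounds to 0.17, 0.82, and 0.18.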

Bernoulli Sequence

No. of Trials between Occurrences

Geometric Probability Distribution

P(T = t | p) = (1 − p)^(t−1) p,  t = 1, 2, ...

E(T) = 1/p = Return Period    Var(T) = (1 − p)/p²

where p = probability of occurrence in a given trial

Geometric Distribution (Example)

GIVEN: p = P(annual max. wave height > 30 m) = 0.01

1. P(next occ. is 10 years from now) = P(T = 10 | p = 0.01)

P(T = 10 | p = 0.01) = (1 − 0.01)^(10−1) (0.01) = 0.0091

2. Return Period: E(T) = 1/0.01 = 100 years

3. P(no occ. in Return Period) = P(X = 0 | 100, 0.01)

P(X = 0 | 100, 0.01) = [100!/(0!(100 − 0)!)] (0.01)⁰ (1 − 0.01)^(100−0) = 0.37

4. P(at least 1 occ. in Return Period) = 1 - 0.37 = 0.63
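The geometric-distribution numbers can be checked in a few lines:

```python
def geom_pmf(t, p):
    """P(T = t | p) = (1 - p)**(t - 1) * p, for t = 1, 2, ..."""
    return (1 - p) ** (t - 1) * p

p = 0.01
p_t10 = geom_pmf(10, p)            # next occurrence exactly 10 years out
return_period = 1 / p              # E(T) = 100 years
p_no_occ = (1 - p) ** 100          # no occurrence within the return period
p_at_least_one = 1 - p_no_occ
```

These give 0.0091, 100 years, 0.37, and 0.63, matching the example. Note that 0.37 is close to e⁻¹, foreshadowing the Poisson limit introduced later.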

Bernoulli Distribution (Example)

Suppose the design criterion for an offshore structure is a 0.01
probability that the design wave height is exceeded during a 30-year
design life.

1. Determine the yearly prob. of occurrence p for the design wave

P(X = 0 | 30, p) = (1 − p)³⁰ = 1 − 0.01 = 0.99
∴ p = 0.00034

Assume the maximum wave height is a random variable that follows a normal
distribution with a mean of 20 m and a standard deviation of 3 m; H = N(20, 3)

2. Determine the maximum wave height that has a probability p = 0.00034
of being exceeded in a given year.

Bernoulli Distribution (Example)

Assume the maximum wave height is a random variable that follows a normal
distribution with a mean of 20 m and a standard deviation of 3 m; H = N(20, 3)

2. Determine the maximum wave height that has a probability p = 0.00034
of being exceeded in a given year.

P(H > h_design) = 1 − Φ((h_design − 20)/3) = 0.00034

∴ h_design = 20 + 3Φ⁻¹(0.99966) = 20 + 3(3.40) = 30.2 m
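Both steps of this design example can be scripted. The `Phi`/`Phi_inv` pair is our erf-plus-bisection helper, not part of the slides:

```python
from math import erf, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def Phi_inv(prob, lo=-10.0, hi=10.0, tol=1e-12):
    """Invert the monotone Phi by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Phi(mid) < prob:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Step 1: (1 - p)**30 = 0.99  =>  yearly exceedance probability p
p = 1 - 0.99 ** (1 / 30)               # ~0.00034

# Step 2: wave height exceeded with probability p in a year, H = N(20, 3)
h_design = 20 + 3 * Phi_inv(1 - p)     # ~30.2 m
```

The exact yearly probability is 0.000335, and the resulting design wave height is about 30.2 m, as on the slide.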

Poisson Process

[Figure: events occurring at random points along a continuous time axis t]

Assumptions
• Events can occur at random at any point in time or space
• Probability of occurrence in a given interval is proportional to the
  interval length
• Occurrence of an event in a given interval is independent of occurrences
  in any other non-overlapping intervals

A Poisson Process is the limit of a Bernoulli Sequence as n → ∞
Poisson Process

No. of Occurrences (X) in Time (or Space) t:

Poisson Probability Distribution

P(X = x | t, ν) = [(νt)^x / x!] e^{−νt},  x = 0, 1, ...

E(X) = νt    Var(X) = νt

where ν = average occurrence rate

Poisson Distribution (Example)

GIVEN: ν = mean rate of flaws in a weld = 1 per 10 m = 0.1 per m

1. P(5 flaws in 100 m) = P(X = 5 | t = 100 m, ν = 0.1 m⁻¹)

P(X = 5 | t = 100 m, ν = 0.1 m⁻¹) = {[(0.1)(100)]⁵ / 5!} e^{−(0.1)(100)} = 0.038

Expected No. of flaws in 100 m:
E(X | t = 100, ν = 0.1) = (0.1)(100) = 10

[Figure: Poisson PMF for νt = 10 over x = 0–25 occurrences, peaking near
x = 10]
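The weld-flaw calculation is a direct application of the Poisson PMF:

```python
from math import exp, factorial

def poisson_pmf(x, nu, t):
    """P(X = x | t, nu) = (nu*t)**x / x! * exp(-nu*t)."""
    m = nu * t
    return m ** x / factorial(x) * exp(-m)

p_five = poisson_pmf(5, 0.1, 100)    # 5 flaws in 100 m of weld
expected = 0.1 * 100                 # E(X) = nu * t = 10 flaws
```

This gives 0.038 for exactly five flaws; five flaws is well below the expected count of ten, which is why the probability is small.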

Poisson Process

Time between Occurrences

Exponential Probability Distribution

fT(t) = νe^{−νt},  t ≥ 0
FT(t) = 1 − e^{−νt}

E(T) = 1/ν = Return Period    Var(T) = 1/ν²

where ν = mean rate of occurrence

Exponential Distribution (Example)

GIVEN: Earthquakes with M > 5.0 occur once in 5 years on average:
ν = 0.2 per year

1. Return Period: E(T) = 1/0.2 = 5 years

2. P(no occ. in Return Period) = P(X = 0 | 5, 0.2) = P(T > 5 | 0.2)
P(T > 5 | 0.2) = e^{−(0.2)(5)} = 0.37

3. P(at least 1 occ. in Return Period) = 1 - 0.37 = 0.63
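The exponential return-period numbers reduce to evaluating e^{−νt}:

```python
from math import exp

nu = 0.2                              # mean earthquake rate, per year
return_period = 1 / nu                # E(T) = 5 years
p_none = exp(-nu * return_period)     # P(T > 5) = e**(-1)
p_at_least_one = 1 - p_none
```

P(T > return period) is always e⁻¹ = 0.37 regardless of ν, since ν(1/ν) = 1; the 0.37/0.63 split seen here and in the geometric example is therefore generic.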
Poisson Process (Example)

The time between strong earthquakes is exponentially distributed with a
mean time of 50 years. The collapse probability for a major bridge is 25
percent during a strong earthquake. Assume that bridge collapse events are
statistically independent between earthquakes.

1. If no earthquake has occurred for the past 10 years, what is the
probability that one occurs in the next 10 years?

X = number of earthquake occurrences

E(T) = 50 yr = 1/ν  ∴ ν = 0.02 yr⁻¹

Because occurrences in non-overlapping intervals are independent, the
earthquake-free past 10 years do not affect this probability:

P(X = 1 | 10, 0.02) = {[(0.02)(10)]¹ / 1!} e^{−(0.02)(10)} = 0.164

Poisson Process (Example)

2. What is the probability that there is no bridge failure due to
earthquakes in the next 10 years?

S = no collapse in 10 years

P(S) = Σ_{x=0}^{∞} P(S | x events) P(x events)

P(S) = Σ_{x=0}^{∞} (1 − p)^x [(νt)^x / x!] e^{−νt} = e^{(1−p)νt} e^{−νt} = e^{−νtp}

P(S) = e^{−(0.02)(10)(0.25)} = 0.95
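The total-probability sum and its closed form e^{−νtp} can be compared term by term, which is a good sanity check on the algebra:

```python
from math import exp, factorial

nu, t, p_fail = 0.02, 10.0, 0.25

# Closed form: P(no collapse) = exp(-nu * t * p)
p_survive = exp(-nu * t * p_fail)

# Total-probability sum over the number of earthquakes x
m = nu * t
p_series = sum((1 - p_fail) ** x * m ** x / factorial(x) * exp(-m)
               for x in range(60))
```

Both routes give 0.95; truncating the series at 60 terms is more than enough since νt = 0.2 makes the terms decay factorially.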
Multiple Random Variables

Example: joint PMF of activity duration (X) and productivity (Y)

Duration, x (hr) | Productivity, y (%) | No. of Observations | Joint PMF pX,Y(x,y)
       6         |         50          |          2          |       0.014
       6         |         70          |          5          |       0.036
       6         |         90          |         10          |       0.072
       8         |         50          |          5          |       0.036
       8         |         70          |         30          |       0.216
       8         |         90          |         25          |       0.180
      10         |         50          |          8          |       0.058
      10         |         70          |         25          |       0.180
      10         |         90          |         11          |       0.079
      12         |         50          |         10          |       0.072
      12         |         70          |          6          |       0.043
      12         |         90          |          2          |       0.014
Totals: 139 observations, Σ pX,Y(x,y) = 1.0

[Figure: 3-D bar chart of the joint PMF pX,Y(x,y) over X = 6–12 hr and
Y = 50–90%]

Multiple Random Variables

Joint Probability Density Function

fX,Y(x, y)

Joint Cumulative Distribution Function

FX,Y(a, b) = P(X ≤ a ∩ Y ≤ b) = ∫_{−∞}^{a} ∫_{−∞}^{b} fX,Y(x, y) dy dx

Marginal Probability Distributions

Marginal Probability Density Functions

fX(x) = ∫_{−∞}^{∞} fX,Y(x, y) dy

Marginal PMF (from the joint PMF table):

x    pX(x)
6    0.122
8    0.432
10   0.317
12   0.129
     1.0

[Figure: bar chart of pX(x) over X = 6–12 hr]

Marginal Probability Distributions

fY(y) = ∫_{−∞}^{∞} fX,Y(x, y) dx

Marginal PMF (from the joint PMF table):

y    pY(y)
50   0.180
70   0.475
90   0.345
     1.0

[Figure: bar chart of pY(y) over Y = 50–90%]

Conditional Probability Distributions

fX|Y(x|y) = fX,Y(x, y) / fY(y)    or    fX,Y(x, y) = fX|Y(x|y) fY(y)

fY|X(y|x) = fX,Y(x, y) / fX(x)    or    fX,Y(x, y) = fY|X(y|x) fX(x)

Conditional PMF (from the joint PMF table):

y    pY|X(y|x = 8)
50   0.0833
70   0.500
90   0.417
     1.0

[Figure: bar chart of pY|X(y|x = 8 hr) over Y = 50–90%]
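The marginal and conditional PMFs can be derived mechanically from the duration/productivity table given earlier:

```python
# Joint PMF from the duration (hr) / productivity (%) table.
joint = {
    (6, 50): 0.014, (6, 70): 0.036, (6, 90): 0.072,
    (8, 50): 0.036, (8, 70): 0.216, (8, 90): 0.180,
    (10, 50): 0.058, (10, 70): 0.180, (10, 90): 0.079,
    (12, 50): 0.072, (12, 70): 0.043, (12, 90): 0.014,
}

def marginal_x(x0):
    """p_X(x) = sum of p_{X,Y}(x, y) over y."""
    return sum(p for (x, y), p in joint.items() if x == x0)

def marginal_y(y0):
    """p_Y(y) = sum of p_{X,Y}(x, y) over x."""
    return sum(p for (x, y), p in joint.items() if y == y0)

def cond_y_given_x(y0, x0):
    """p_{Y|X}(y|x) = p_{X,Y}(x, y) / p_X(x)."""
    return joint[(x0, y0)] / marginal_x(x0)
```

For example, marginal_x(8) reproduces 0.432 and cond_y_given_x(70, 8) reproduces 0.500 from the tables above.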

Multiple Random Variables
Covariance and Correlation

COV(X, Y) = E[(X − μX)(Y − μY)] = E(XY) − E(X)E(Y)

E(XY) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy fX,Y(x, y) dy dx

ρ = COV(X, Y) / (σX σY),  −1.0 ≤ ρ ≤ 1.0

Statistical Independence

fX,Y(x, y) = fX(x) fY(y)
COV(X, Y) = 0.0 and ρ = 0.0
Bivariate Normal Distribution

fX,Y(x, y) = [ (1/(σY√(1 − ρ²)√(2π))) e^{−½((y − μY − ρ(σY/σX)(x − μX)) / (σY√(1 − ρ²)))²} ]
             × [ (1/(σX√(2π))) e^{−½((x − μX)/σX)²} ]

fX,Y(x, y) = fY|X(y|x) fX(x)

Note: fY|X(y|x) is normal with

E(Y|X = x) = μY + ρ(σY/σX)(x − μX)

Var(Y|X = x) = σY²(1 − ρ²)

Bivariate Normal Distribution

[Figure: 3-D surface of the joint PDF fX,Y(x,y) for μX = 0, σX = 1,
μY = 0, σY = 1, ρX,Y = −0.8, plotted over x and y from −3 to 3]
Example

Wave height (H) and period (Tp) are modeled using a bivariate normal
distribution:

μH = 10 m, σH = 2 m, μTp = 15 s, σTp = 1.5 s, ρ = 0.4

With no information concerning the wave period,

P(H > 14 m) = 1 − Φ((14 − 10)/2) = 1 − Φ(2.00) = 1 − 0.977 = 0.023

What if the period is observed: Tp = 18 s

E(H | Tp = 18 s) = 10 + 0.4(2/1.5)(18 − 15) = 11.6 m

Var(H | Tp = 18 s) = 2²(1 − 0.4²) = 3.36 m²

P(H > 14 m) = 1 − Φ((14 − 11.6)/√3.36) = 1 − Φ(1.31) = 1 − 0.905 = 0.095
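The conditional-update step in this wave example can be scripted end to end. The `Phi` helper built from `math.erf` is ours, not from the slides:

```python
from math import erf, sqrt

def Phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu_H, s_H = 10.0, 2.0     # wave height (m)
mu_T, s_T = 15.0, 1.5     # wave period (s)
rho = 0.4

# Unconditional exceedance probability
p_prior = 1 - Phi((14 - mu_H) / s_H)

# Condition on an observed period Tp = 18 s (conditional mean and variance
# from the bivariate normal relations above)
mu_c = mu_H + rho * (s_H / s_T) * (18 - mu_T)   # 11.6 m
var_c = s_H ** 2 * (1 - rho ** 2)               # 3.36 m^2
p_post = 1 - Phi((14 - mu_c) / sqrt(var_c))
```

Observing the long period roughly quadruples the exceedance probability, from about 0.023 to about 0.095, because H and Tp are positively correlated.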
