
Continuous Random Variables

The probability that a continuous random variable, $X$, has a value between $a$ and $b$ is computed by integrating its probability density function (p.d.f.) over the interval $[a, b]$:
$$P(a \leq X \leq b) = \int_a^b f_X(x)\,dx.$$
A p.d.f. must integrate to one:
$$\int_{-\infty}^{\infty} f_X(x)\,dx = 1.$$
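These two integrals can be checked numerically. Below is a minimal sketch (not from the slides) using a midpoint-rule Riemann sum and the toy density $f(x) = 2x$ on $[0, 1]$:

```python
# A minimal numerical sanity check (toy density f(x) = 2x on [0, 1],
# not from the slides): approximate both integrals with a midpoint-rule
# Riemann sum.

def f(x):
    """p.d.f. of a toy random variable: f(x) = 2x on [0, 1], else 0."""
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def integrate(g, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    dx = (b - a) / n
    return sum(g(a + (i + 0.5) * dx) for i in range(n)) * dx

total = integrate(f, 0.0, 1.0)   # must be 1 for a valid p.d.f.
p = integrate(f, 0.2, 0.5)       # P(0.2 <= X <= 0.5) = 0.5**2 - 0.2**2 = 0.21
print(total, p)
```

The midpoint rule is exact for linear integrands, so both values match the closed forms to floating-point precision.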
Continuous Random Variables (contd.)

The probability that the continuous random variable, $X$, has any exact value, $a$, is 0:
$$P(X = a) = \lim_{\Delta x \to 0} P(a \leq X \leq a + \Delta x) = \lim_{\Delta x \to 0} \int_a^{a + \Delta x} f_X(x)\,dx = 0.$$
In general,
$$P(X = a) \neq f_X(a).$$
Probability Density

The probability density at $a$ multiplied by $\varepsilon$ approximately equals the probability mass contained within an interval of width $\varepsilon$ centered on $a$:
$$\varepsilon\, f_X(a) \approx \int_{a - \varepsilon/2}^{a + \varepsilon/2} f_X(x)\,dx = P(a - \varepsilon/2 \leq X \leq a + \varepsilon/2).$$
Cumulative Distribution Function

A continuous random variable, $X$, can also be defined by its cumulative distribution function (c.d.f.):
$$F_X(a) = P(X \leq a) = \int_{-\infty}^{a} f_X(x)\,dx.$$
For any c.d.f., $F_X(-\infty) = 0$ and $F_X(\infty) = 1$. The probability that a continuous random variable, $X$, has a value between $a$ and $b$ is easily computed using the c.d.f.:
$$P(a \leq X \leq b) = \int_a^b f_X(x)\,dx = \int_{-\infty}^{b} f_X(x)\,dx - \int_{-\infty}^{a} f_X(x)\,dx = F_X(b) - F_X(a).$$
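As a quick numerical illustration (a hypothetical example, assuming the toy density $f(x) = 2x$ on $[0, 1]$, whose c.d.f. is $F(x) = x^2$), integrating the p.d.f. over $[a, b]$ agrees with $F(b) - F(a)$:

```python
# Check that F(b) - F(a) matches the direct integral of the p.d.f.,
# using the toy density f(x) = 2x on [0, 1] with c.d.f. F(x) = x**2.

def f(x):
    return 2.0 * x if 0.0 <= x <= 1.0 else 0.0

def F(x):
    """c.d.f. of the toy density: clamp x to [0, 1], then square."""
    return min(max(x, 0.0), 1.0) ** 2

a, b = 0.3, 0.8
n = 100_000
dx = (b - a) / n
integral = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx
print(integral, F(b) - F(a))  # both equal 0.55
```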
Cumulative Distribution Function (contd.)

The p.d.f., $f_X(x)$, can be derived from the c.d.f., $F_X(x)$:
$$f_X(x) = \frac{d}{dx} \int_{-\infty}^{x} f_X(s)\,ds = \frac{dF_X(x)}{dx}.$$
Joint Probability Densities

Let $X$ and $Y$ be continuous random variables. The probability that $a \leq X \leq b$ and $c \leq Y \leq d$ is found by integrating the joint probability density function for $X$ and $Y$ over the interval $[a, b]$ w.r.t. $x$ and over the interval $[c, d]$ w.r.t. $y$:
$$P(a \leq X \leq b,\ c \leq Y \leq d) = \int_a^b \int_c^d f_{XY}(x, y)\,dy\,dx.$$
Like a one-dimensional p.d.f., a two-dimensional joint p.d.f. must also integrate to one:
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{XY}(x, y)\,dx\,dy = 1.$$
Marginal Probability Densities

$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy$$
$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dx$$
Conditional Probability Densities

$$f_{X|Y}(x \mid y) = \frac{f_{XY}(x, y)}{f_Y(y)} = \frac{f_{XY}(x, y)}{\int_{-\infty}^{\infty} f_{XY}(x, y)\,dx}$$
$$f_{XY}(x, y) = f_{X|Y}(x \mid y)\, f_Y(y)$$

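These relations can be illustrated numerically with an assumed joint density (a standard textbook choice, not from the slides): $f_{XY}(x, y) = x + y$ on the unit square. A conditional density formed this way must itself integrate to one:

```python
# Assumed joint density f_XY(x, y) = x + y on the unit square.
# Verify that the conditional density f_X|Y(x | y) integrates to one.

def f_xy(x, y):
    return x + y if 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0 else 0.0

def marginal_y(y, n=10_000):
    """f_Y(y): integrate the joint density over x (midpoint rule)."""
    dx = 1.0 / n
    return sum(f_xy((i + 0.5) * dx, y) for i in range(n)) * dx

y0 = 0.25
fy = marginal_y(y0)              # f_Y(0.25) = 0.25 + 1/2 = 0.75
n = 10_000
dx = 1.0 / n
# integrate f_XY(x, y0) / f_Y(y0) over x; the result must be 1
mass = sum(f_xy((i + 0.5) * dx, y0) / fy for i in range(n)) * dx
print(fy, mass)
```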

Exponential Density

A constant fraction of a radioactive sample decays per unit time:
$$\frac{df(t)}{dt} = -\frac{1}{\tau} f(t).$$
What fraction of the radioactive sample will remain after time $t$?
$$\frac{d\!\left(e^{-t/\tau}\right)}{dt} = -\frac{1}{\tau} e^{-t/\tau}$$
Exponential Density (contd.)

The function, $f(t) = e^{-t/\tau}$, satisfies the differential equation, but it does not integrate to one:
$$\int_0^{\infty} e^{-t/\tau}\,dt = \left. -\tau e^{-t/\tau} \right|_0^{\infty} = 0 + \tau = \tau.$$
So that $\int_{-\infty}^{\infty} f_T(t)\,dt = 1$, we divide $f(t)$ by $\tau$:
$$f_T(t) = \frac{1}{\tau} e^{-t/\tau}.$$
Exponential Density (contd.)

The time, $T$, at which an atom of a radioactive element decays is a continuous random variable with the following p.d.f.:
$$f_T(t) = \frac{1}{\tau} e^{-t/\tau}.$$
The corresponding c.d.f. is:
$$F_T(a) = \int_0^a \frac{1}{\tau} e^{-t/\tau}\,dt = \left. -e^{-t/\tau} \right|_0^a = 1 - e^{-a/\tau}.$$
The c.d.f. gives the probability that an atom of a radioactive element has already decayed by time $a$.
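The closed-form c.d.f. can be cross-checked against a direct numerical integral of the p.d.f. (a sketch with the arbitrary illustrative value $\tau = 2$):

```python
import math

# Check the closed-form exponential c.d.f. against a direct numerical
# integral of the p.d.f. (tau = 2.0 is an arbitrary illustrative value).
tau = 2.0

def pdf(t):
    return math.exp(-t / tau) / tau if t >= 0.0 else 0.0

def cdf(a):
    return 1.0 - math.exp(-a / tau)

a = 3.0
n = 100_000
dt = a / n
numeric = sum(pdf((i + 0.5) * dt) for i in range(n)) * dt
print(numeric, cdf(a))  # both ~0.7769
```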
Example

The lifetime of a radioactive element is a continuous random variable with the following p.d.f.:
$$f_T(t) = \frac{1}{100} e^{-t/100}.$$
The probability that an atom of this element will decay within 50 years is:
$$P(0 \leq T \leq 50) = \int_0^{50} \frac{1}{100} e^{-t/100}\,dt = 1 - e^{-0.5} \approx 0.39.$$
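The arithmetic of the example is one line of code:

```python
import math

tau = 100.0                       # mean lifetime from the example
p = 1.0 - math.exp(-50.0 / tau)   # P(0 <= T <= 50) = 1 - e^{-0.5}
print(p)                          # 0.3934..., i.e. 0.39 to two places
```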
Exponential Density (contd.)

The half-life, $\lambda$, is defined as the time required for half of a radioactive sample to decay:
$$P(0 \leq T \leq \lambda) = 1/2.$$
Since
$$P(0 \leq T \leq \lambda) = \int_0^{\lambda} \frac{1}{100} e^{-t/100}\,dt = 1 - e^{-\lambda/100} = 1/2,$$
it follows that $\lambda = 100 \ln 2 \approx 69.31$ years.
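The same result falls out of a root-finding sketch: solve $F_T(\lambda) = 1/2$ numerically (bisection here) and compare with the closed form $\lambda = \tau \ln 2$:

```python
import math

# Solve F_T(lam) = 1/2 for the half-life by bisection and compare with
# the closed form lam = tau * ln 2 (tau = 100 as in the example).
tau = 100.0

def cdf(t):
    return 1.0 - math.exp(-t / tau)

lo, hi = 0.0, 1000.0              # cdf(0) = 0 < 1/2 < cdf(1000)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if cdf(mid) < 0.5:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(lam, tau * math.log(2.0))   # both ~69.31
```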
Figure 1: Exponential p.d.f., $\frac{1}{10} e^{-t/10}$, and c.d.f., $1 - e^{-t/10}$.
Memoryless Property of the Exponential

If $X$ is an exponentially distributed random variable, then
$$P(X > s + t \mid X > t) = P(X > s).$$

Proof:
$$P(X > s + t \mid X > t) = \frac{P(X > s + t,\ X > t)}{P(X > t)} = \frac{P(X > t \mid X > s + t)\, P(X > s + t)}{P(X > t)} = \frac{P(X > s + t)}{P(X > t)},$$
since $P(X > t \mid X > s + t) = 1$. Because $P(X > t) = 1 - P(X \leq t)$,
$$\frac{P(X > s + t)}{P(X > t)} = \frac{1 - (1 - e^{-(s+t)/\tau})}{1 - (1 - e^{-t/\tau})} = e^{-s/\tau} = P(X > s).$$
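A Monte Carlo sketch makes the property concrete (the values $\tau = 1$, $s = 0.5$, $t = 1$ are illustrative choices, and the seed is fixed for reproducibility):

```python
import random

# Monte Carlo check of the memoryless property: the conditional
# survival probability P(X > s+t | X > t) should match P(X > s).
random.seed(0)
tau = 1.0
samples = [random.expovariate(1.0 / tau) for _ in range(200_000)]

s, t = 0.5, 1.0
beyond_t = [x for x in samples if x > t]
lhs = sum(x > s + t for x in beyond_t) / len(beyond_t)  # P(X > s+t | X > t)
rhs = sum(x > s for x in samples) / len(samples)        # P(X > s) = e^{-1/2}
print(lhs, rhs)
```

Both estimates land near $e^{-1/2} \approx 0.607$, as the proof predicts.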
Memoryless Property of the Exponential (contd.)

In plain language: knowing how long we've already waited doesn't tell us anything about how much longer we are going to have to wait, e.g., for a bus.
Expected Value

Let $X$ be a continuous random variable. The expected value of $X$ is defined as follows:
$$\langle X \rangle = \mu = \int_{-\infty}^{\infty} x\, f_X(x)\,dx$$
Variance

The variance of $X$ is defined as the expected value of the squared difference of $X$ and $\langle X \rangle$:
$$\left\langle [X - \langle X \rangle]^2 \right\rangle = \sigma^2 = \int_{-\infty}^{\infty} [x - \langle X \rangle]^2 f_X(x)\,dx$$
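Both definitions can be evaluated numerically. A sketch (assuming an exponential density with the illustrative value $\tau = 2$, whose mean is $\tau$ and whose variance is $\tau^2$):

```python
import math

# Numerically evaluate the mean and variance of an exponential density
# f(t) = e^{-t/tau}/tau with tau = 2; the exact answers are tau and tau**2.
tau = 2.0
upper = 50.0 * tau               # the tail beyond this point is negligible
n = 200_000
dt = upper / n

def pdf(t):
    return math.exp(-t / tau) / tau

ts = [(i + 0.5) * dt for i in range(n)]
mean = sum(t * pdf(t) for t in ts) * dt
var = sum((t - mean) ** 2 * pdf(t) for t in ts) * dt
print(mean, var)                 # ~2.0 and ~4.0
```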
Gaussian Density

A random variable $X$ with p.d.f.,
$$f_X(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-(x - \mu)^2 / (2\sigma^2)},$$
is called a Gaussian (or normal) random variable with expected value, $\mu$, and variance, $\sigma^2$.
Expected Value for Gaussian Density

Let the p.d.f., $f_X(x)$, equal
$$\frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x - \mu)^2 / (2\sigma^2)}.$$
The expected value, $\langle X \rangle$, can be derived as follows:
$$\langle X \rangle = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} x\, e^{-(x - \mu)^2 / (2\sigma^2)}\,dx.$$
Expected Value for Gaussian Density (contd.)

Writing $x$ as $(x - \mu) + \mu$:
$$\langle X \rangle = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} (x - \mu)\, e^{-(x - \mu)^2 / (2\sigma^2)}\,dx + \mu \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} e^{-(x - \mu)^2 / (2\sigma^2)}\,dx.$$
The first term is zero, since (after substitution of $u$ for $x - \mu$) it is the integral of the product of an odd and an even function. The second term is $\mu$, since
$$\frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} e^{-(x - \mu)^2 / (2\sigma^2)}\,dx = \int_{-\infty}^{\infty} f_X(x)\,dx = 1.$$
Consequently,
$$\langle X \rangle = \mu.$$
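The derivation can be confirmed numerically (a sketch; $\mu = 1.5$ and $\sigma = 0.7$ are arbitrary illustrative values):

```python
import math

# Numerical confirmation that <X> = mu for a Gaussian density.
mu, sigma = 1.5, 0.7

def pdf(x):
    return (math.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))
            / (sigma * math.sqrt(2.0 * math.pi)))

lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma  # +/- 10 sigma holds essentially all mass
n = 200_000
dx = (hi - lo) / n
mean = sum((lo + (i + 0.5) * dx) * pdf(lo + (i + 0.5) * dx)
           for i in range(n)) * dx
print(mean)  # ~1.5
```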
C.d.f. for Gaussian Density

Because the Gaussian integrates to one and is symmetric about zero (taking $\mu = 0$), its c.d.f., $F_X(a)$, can be written as follows:
$$\int_{-\infty}^{a} f_X(x)\,dx = \begin{cases} \frac{1}{2} - \int_a^0 f_X(x)\,dx & \text{if } a < 0 \\ \frac{1}{2} + \int_0^a f_X(x)\,dx & \text{otherwise.} \end{cases}$$
Equivalently, we can write:
$$\int_{-\infty}^{a} f_X(x)\,dx = \begin{cases} \frac{1}{2} - \int_0^{|a|} f_X(x)\,dx & \text{if } a < 0 \\ \frac{1}{2} + \int_0^{|a|} f_X(x)\,dx & \text{otherwise.} \end{cases}$$
C.d.f. for Gaussian Density (contd.)

To evaluate $\int_0^{|a|} f_X(x)\,dx$, recall that the Taylor series for $e^x$ is:
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}.$$
The Taylor series for a Gaussian is therefore:
$$f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} = \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{(-x^2/2)^n}{n!} = \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{n!\, 2^n}.$$
C.d.f. for Gaussian Density (contd.)

Consequently:
$$\int_0^{|a|} f_X(x)\,dx = \frac{1}{\sqrt{2\pi}} \int_0^{|a|} \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n}}{n!\, 2^n}\,dx = \frac{1}{\sqrt{2\pi}} \left. \sum_{n=0}^{\infty} \frac{(-1)^n x^{2n+1}}{n!\, 2^n (2n+1)} \right|_0^{|a|} = \frac{1}{\sqrt{2\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n |a|^{2n+1}}{n!\, 2^n (2n+1)}.$$
Figure 2: Gaussian p.d.f. and c.d.f., $\mu = 0$ and $\sigma^2 = 1$, computed using Taylor series.
