Continuous Random Variables
Dr N A Heard
Contents

1 Continuous Random Variables
  1.1 Introduction
  1.2 Probability Density Functions
  1.3 Transformations
2 Mean, Variance and Quantiles
  2.1 Expectation
  2.2 Variance
  2.3 Quantiles
3 Continuous Distributions
  3.1 Uniform
  3.2 Exponential
  3.3 Gaussian
1 Continuous Random Variables
1.1 Introduction
Definition
Suppose again we have a random experiment with sample space $S$ and probability measure $P$.

Recall our definition of a random variable as a mapping $X : S \to \mathbb{R}$ from the sample space $S$ to the real numbers, inducing a probability measure $P_X(B) = P\{X^{-1}(B)\}$, $B \subseteq \mathbb{R}$.
Comments
A connected sequence of comments:
One consequence of this definition is that the probability of any singleton set $B = \{x\}$, $x \in \mathbb{R}$, is zero for a continuous random variable:
$$P_X(X = x) = P_X(\{x\}) = 0.$$
This in turn implies that any countable set $B = \{x_1, x_2, \ldots\} \subseteq \mathbb{R}$ will have zero probability measure for a continuous random variable, since $P_X(X \in B) = P_X(X = x_1) + P_X(X = x_2) + \cdots = 0$.

This automatically implies that the range of a continuous random variable will be uncountable.

This tells us that a random variable cannot be both discrete and continuous.
Examples
The following quantities would typically be modelled with continuous random variables.
They are measurements of time, distance and other phenomena that can, at least in theory, be
determined to an arbitrarily high degree of accuracy.
Note that there are obvious ways to discretise all such examples.
1.2 Probability Density Functions
pdf as the derivative of the cdf
From (1), we notice the cumulative distribution function (cdf) for a continuous random variable $X$ is therefore given by
$$F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt, \quad x \in \mathbb{R}.$$
This expression leads us to a definition of the pdf of a continuous r.v. $X$ for which we already have the cdf; by the Fundamental Theorem of Calculus we find the pdf of $X$ to be given by
$$f_X(x) = \frac{d}{dx}F_X(x) = F_X'(x).$$
Properties of a pdf
Since the pdf is the derivative of the cdf, and because we know that a cdf is non-decreasing, this tells us the pdf will always be non-negative.

So, in the same way as we did for cdfs and discrete pmfs, we have the following checklist to ensure $f_X$ is a valid pdf:

1. $f_X(x) \geq 0$, $\forall x \in \mathbb{R}$;
2. $\int_{-\infty}^{\infty} f_X(x)\,dx = 1.$
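These two conditions can be checked numerically for any candidate density with finite support. A minimal sketch in Python (the helper name `is_valid_pdf` and the midpoint-rule tolerance are my own choices, not from the notes):

```python
import math

def is_valid_pdf(f, lo, hi, n=100_000, tol=1e-4):
    """Numerically check the pdf checklist on [lo, hi]:
    1. f(x) >= 0 at every sampled point;
    2. the integral of f over its support is (approximately) 1."""
    dx = (hi - lo) / n
    xs = [lo + (i + 0.5) * dx for i in range(n)]   # midpoint rule
    if any(f(x) < 0 for x in xs):
        return False
    total = sum(f(x) for x in xs) * dx
    return math.isclose(total, 1.0, abs_tol=tol)

# f(x) = x^2/9 on (0, 3) -- the example density used below in these notes
print(is_valid_pdf(lambda x: x**2 / 9, 0.0, 3.0))   # True
print(is_valid_pdf(lambda x: x**2, 0.0, 3.0))       # False: integrates to 9, not 1
```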
Interval Probabilities
Suppose we are interested in whether a continuous r.v. $X$ lies in an interval $(a, b]$. From the definition of a continuous random variable, this is given by
$$P_X(a < X \leq b) = \int_a^b f_X(x)\,dx.$$
[Figure: a pdf $f(x)$ with the area under the curve between $a$ and $b$ shaded, representing $P(a < X < b)$.]
Further Comments:
Besides still being a non-decreasing function satisfying $F_X(-\infty) = 0$, $F_X(\infty) = 1$, the cdf $F_X$ of a continuous random variable $X$ is also (absolutely) continuous.

For a continuous r.v., since $\forall x$, $P(X = x) = 0$, we have $F_X(x) = P(X \leq x) = P(X < x)$.

For small $\delta x$, $f_X(x)\,\delta x$ is approximately the probability that $X$ takes a value in the small interval $[x, x + \delta x)$.

Since the density (pdf) $f_X(x)$ is not itself a probability, then unlike the pmf of a discrete r.v. we do not require $f_X(x) \leq 1$.

From (1) it is clear that the pdf of a continuous r.v. $X$ completely characterises its distribution, so we often just specify $f_X$.
Example
Suppose we have a continuous random variable X with probability density function
$$f(x) = \begin{cases} cx^2, & 0 < x < 3 \\ 0, & \text{otherwise} \end{cases}$$
for some unknown constant $c$.
Questions
1. Determine c.
2. Find the cdf of X.
3. Calculate P(1 < X < 2).
Solutions
1. To find $c$: We must have
$$1 = \int_{-\infty}^{\infty} f(x)\,dx = \int_0^3 cx^2\,dx = c\left[\frac{x^3}{3}\right]_0^3 = 9c \implies c = \frac{1}{9}.$$

2. $$F(x) = \int_{-\infty}^{x} f(u)\,du = \begin{cases} 0, & x < 0 \\ \int_0^x \frac{u^2}{9}\,du = \frac{x^3}{27}, & 0 \leq x \leq 3 \\ 1, & x > 3. \end{cases}$$

3. $P(1 < X < 2) = F(2) - F(1) = \frac{8}{27} - \frac{1}{27} = \frac{7}{27} \approx 0.2593.$
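The three answers can be verified with exact rational arithmetic; a sketch using Python's `fractions` module, where the helper `F` mirrors the cdf derived in solution 2:

```python
from fractions import Fraction

# f(x) = c x^2 on (0, 3): integrating gives 9c = 1, so c = 1/9
c = Fraction(1, 9)

def F(x):
    """cdf of the example: F(x) = x^3/27 on [0, 3]."""
    if x < 0:
        return Fraction(0)
    if x > 3:
        return Fraction(1)
    return c * Fraction(x) ** 3 / 3

print(F(3))                 # 1  (so c = 1/9 normalises the density)
print(F(2) - F(1))          # 7/27
print(float(F(2) - F(1)))   # 0.25925925925925924
```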
[Figure: the pdf $f(x)$ and cdf $F(x)$ of $X$, plotted for $-1 \leq x \leq 5$.]
1.3 Transformations
Transforming random variables
Suppose we have a continuous random variable $X$ and wish to consider the transformed random variable $Y = g(X)$ for some function $g : \mathbb{R} \to \mathbb{R}$, s.t. $g$ is continuous and strictly monotonic (so $g^{-1}$ exists).

Note $(g^{-1})'(y) = \frac{d}{dy}g^{-1}(y)$ is positive since we assumed $g$ was increasing.

If we had $g$ monotonic decreasing, $Y \leq y \iff X \geq g^{-1}(y)$ and ...
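For increasing $g$, the standard change-of-variables result gives $f_Y(y) = f_X(g^{-1}(y)) \, (g^{-1})'(y)$, and this can be sanity-checked by simulation. A sketch, assuming the running example pdf $f_X(x) = x^2/9$ on $(0, 3)$ and my own illustrative choice $g(x) = x^2$, so that $f_Y(y) = \sqrt{y}/18$ on $(0, 9)$:

```python
import random

random.seed(0)

# X has pdf f_X(x) = x^2/9 on (0, 3); g(x) = x^2 is strictly increasing there,
# g^{-1}(y) = sqrt(y), (g^{-1})'(y) = 1/(2 sqrt(y)),
# so f_Y(y) = (sqrt(y)^2/9) * 1/(2 sqrt(y)) = sqrt(y)/18 on (0, 9).

def F_Y(y):
    # cdf of Y obtained by integrating sqrt(t)/18: F_Y(y) = y^{3/2}/27
    return y ** 1.5 / 27

# simulate X by inverting F_X(x) = x^3/27: X = 3 U^{1/3} for U ~ U(0, 1)
n = 200_000
ys = [(3 * random.random() ** (1 / 3)) ** 2 for _ in range(n)]
empirical = sum(y <= 4 for y in ys) / n       # empirical P(Y <= 4)
print(abs(empirical - F_Y(4)) < 0.01)          # True: simulation matches the formula
```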
2 Mean, Variance and Quantiles
2.1 Expectation
E( X )
For a continuous random variable $X$ we define the mean or expectation of $X$,
$$\mu_X \text{ or } E_X(X) = \int_{-\infty}^{\infty} x f_X(x)\,dx.$$
Linearity of Expectation
Clearly, for continuous random variables we again have linearity of expectation:
$$E(aX + b) = aE(X) + b, \quad \forall a, b \in \mathbb{R},$$
$$E\{g(X) + h(X)\} = E\{g(X)\} + E\{h(X)\}.$$
2.2 Variance
Var( X )
The variance of a continuous random variable $X$ is given by
$$\sigma_X^2 \text{ or } \mathrm{Var}_X(X) = E\{(X - \mu_X)^2\} = \int_{-\infty}^{\infty} (x - \mu_X)^2 f_X(x)\,dx.$$
Expanding the square, this can equivalently be written
$$\mathrm{Var}_X(X) = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx - \mu_X^2 = E(X^2) - \{E(X)\}^2.$$
2.3 Quantiles
$Q_X(\alpha)$
Recall we defined the lower and upper quartiles and median of a sample of data as points $(\frac{1}{4}, \frac{1}{2}, \frac{3}{4})$-way through the ordered sample.

For $\alpha \in [0, 1]$ and a continuous random variable $X$ we define the $\alpha$-quantile of $X$, $Q_X(\alpha)$, as
$$Q_X(\alpha) = \min_{q \in \mathbb{R}} \{q : F_X(q) = \alpha\}.$$
If $F_X$ is invertible then
$$Q_X(\alpha) = F_X^{-1}(\alpha).$$
In particular the median of a random variable $X$ is $Q_X(0.5)$. That is, the median is a solution to the equation $F_X(x) = \frac{1}{2}$.
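When $F_X$ is invertible but has no convenient closed-form inverse, $Q_X(\alpha)$ can be found numerically by bisection. A sketch (the helper `quantile` is my own name; it assumes $F$ is continuous and strictly increasing on $[lo, hi]$):

```python
def quantile(F, alpha, lo, hi, tol=1e-9):
    """Q_X(alpha): the q with F(q) = alpha, found by bisection.
    Assumes F is continuous and strictly increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if F(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

F = lambda x: x**3 / 27          # the example cdf on [0, 3] from Section 1.2
median = quantile(F, 0.5, 0.0, 3.0)
print(round(median, 4))          # 2.3811
```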
Example (continued)
Again suppose we have a continuous random variable X with probability density function
given by
$$f(x) = \begin{cases} x^2/9, & 0 < x < 3 \\ 0, & \text{otherwise.} \end{cases}$$
Questions

1. Calculate $E(X)$.
2. Calculate $\mathrm{Var}(X)$.
3. Calculate the median of $X$.
Solutions
1. $E(X) = \int_{-\infty}^{\infty} x f(x)\,dx = \int_0^3 x \cdot \frac{x^2}{9}\,dx = \left[\frac{x^4}{36}\right]_0^3 = \frac{3^4}{36} = 2.25.$

2. $E(X^2) = \int_{-\infty}^{\infty} x^2 f(x)\,dx = \int_0^3 x^2 \cdot \frac{x^2}{9}\,dx = \left[\frac{x^5}{45}\right]_0^3 = \frac{3^5}{45} = 5.4.$

So $\mathrm{Var}(X) = E(X^2) - \{E(X)\}^2 = 5.4 - 2.25^2 = 0.3375.$

3. From earlier, $F(x) = \frac{x^3}{27}$ for $0 < x < 3$.

Setting $F(x) = \frac{1}{2}$ and solving, we get $\frac{x^3}{27} = \frac{1}{2} \implies x = \sqrt[3]{\frac{27}{2}} = \frac{3}{\sqrt[3]{2}} \approx 2.3811$ for the median.
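The integrals in solutions 1 and 2 can be confirmed numerically; a sketch using a midpoint-rule helper `expect` (my own name) for $E\{h(X)\}$:

```python
def expect(h, f, lo, hi, n=200_000):
    """Midpoint-rule approximation of E{h(X)} = integral of h(x) f(x) dx."""
    dx = (hi - lo) / n
    return sum(h(x) * f(x) for x in (lo + (i + 0.5) * dx for i in range(n))) * dx

f = lambda x: x**2 / 9                 # pdf on (0, 3)
m  = expect(lambda x: x,    f, 0, 3)   # E(X)
m2 = expect(lambda x: x**2, f, 0, 3)   # E(X^2)
print(round(m, 4), round(m2, 4), round(m2 - m**2, 4))   # 2.25 5.4 0.3375
```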
3 Continuous Distributions
3.1 Uniform
U( a, b)
Suppose $X$ is a continuous random variable with probability density function
$$f(x) = \begin{cases} \frac{1}{b-a}, & a < x < b \\ 0, & \text{otherwise}, \end{cases}$$
for some $a < b$; then $X$ is said to follow a uniform distribution on $(a, b)$ and we write $X \sim \mathrm{U}(a, b)$.
Example: U(0,1)
[Figure: the pdf $f(x)$ and cdf $F(x)$ of U(0,1).]
Notice from the cdf that the quantiles of U(0,1) are the special case where $Q(\alpha) = \alpha$.
Claim: If $X \sim \mathrm{U}(0, 1)$ and $Y = a + (b - a)X$, then $Y \sim \mathrm{U}(a, b)$.

Proof:

We first observe that for any $y \in (a, b)$, $Y \leq y \iff a + (b - a)X \leq y \iff X \leq \frac{y-a}{b-a}$.

From this we find $Y \sim \mathrm{U}(a, b)$, since
$$F_Y(y) = P(Y \leq y) = P\left(X \leq \frac{y-a}{b-a}\right) = F_X\left(\frac{y-a}{b-a}\right) = \frac{y-a}{b-a}.$$
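This linear-transformation result can be checked by simulation: transform standard-uniform draws and compare the empirical cdf of $Y$ with $(y-a)/(b-a)$. A sketch with the arbitrary choices $a = 2$, $b = 5$:

```python
import random

random.seed(1)

# Y = a + (b - a) X with X ~ U(0, 1) should give Y ~ U(a, b):
# compare the empirical cdf of simulated Y with F_Y(y) = (y - a)/(b - a).
a, b = 2.0, 5.0
n = 100_000
ys = [a + (b - a) * random.random() for _ in range(n)]

errors = [abs(sum(v <= y for v in ys) / n - (y - a) / (b - a))
          for y in (2.5, 3.5, 4.5)]
print(max(errors) < 0.01)   # True: empirical cdf matches U(a, b)
```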
Mean and Variance of U( a, b)
To find the mean of $X \sim \mathrm{U}(a, b)$,
$$E(X) = \int_{-\infty}^{\infty} x f(x)\,dx = \int_a^b x \cdot \frac{1}{b-a}\,dx = \left[\frac{x^2}{2(b-a)}\right]_a^b = \frac{b^2 - a^2}{2(b-a)} = \frac{(b-a)(b+a)}{2(b-a)} = \frac{a+b}{2}.$$
Similarly we get $\mathrm{Var}(X) = E(X^2) - E(X)^2 = \frac{(b-a)^2}{12}$, so
$$\mu = \frac{a+b}{2}, \qquad \sigma^2 = \frac{(b-a)^2}{12}.$$
3.2 Exponential
$\mathrm{Exp}(\lambda)$
Suppose now $X$ is a random variable taking values on $\mathbb{R}^+ = [0, \infty)$ with pdf
$$f(x) = \lambda e^{-\lambda x}, \quad x \geq 0,$$
for some $\lambda > 0$. Then $X$ is said to follow an exponential distribution with rate parameter $\lambda$ and we write $X \sim \mathrm{Exp}(\lambda)$.

The cdf is
$$F(x) = 1 - e^{-\lambda x}, \quad x \geq 0,$$
and the mean and variance are
$$\mu = \frac{1}{\lambda}, \qquad \sigma^2 = \frac{1}{\lambda^2}.$$
Example: Exp(0.2), Exp(0.5) & Exp(1) pdfs

[Figure: the three pdfs $f(x)$, labelled $\lambda = 1$, $\lambda = 0.5$ and $\lambda = 0.2$, for $0 \leq x \leq 10$.]
Example: Exp(0.2), Exp(0.5) & Exp(1) cdfs
[Figure: the corresponding cdfs $F(x)$ for $0 \leq x \leq 10$.]
For $x, s > 0$, consider the conditional probability $P(X > x + s \mid X > s)$ of the additional magnitude of an exponentially distributed random variable given we already know it is greater than $s$.

Well,
$$P(X > x + s \mid X > s) = \frac{P(X > x + s)}{P(X > s)},$$
which, when $X \sim \mathrm{Exp}(\lambda)$, gives
$$P(X > x + s \mid X > s) = \frac{e^{-\lambda(x+s)}}{e^{-\lambda s}} = e^{-\lambda x},$$
again the tail probability of an exponential distribution with parameter $\lambda$.

So if we think of the exponential variable as the time to an event, then knowledge that we have waited time $s$ for the event tells us nothing about how much longer we will have to wait: the process has no memory.
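The memoryless property can be demonstrated by simulation, drawing $\mathrm{Exp}(\lambda)$ variates by inverting the cdf ($X = -\log(U)/\lambda$ for $U \sim \mathrm{U}(0,1)$). A sketch with the arbitrary choices $\lambda = 0.5$, $s = 1$, $x = 2$:

```python
import math
import random

random.seed(2)
lam, s, x = 0.5, 1.0, 2.0

# draw exponential variates by inversion: X = -log(U)/lambda, U ~ U(0, 1)
draws = [-math.log(1.0 - random.random()) / lam for _ in range(200_000)]

survived_s = [d for d in draws if d > s]
cond = sum(d > x + s for d in survived_s) / len(survived_s)  # P(X > x+s | X > s)
print(abs(cond - math.exp(-lam * x)) < 0.01)   # True: matches e^{-lambda x}
```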
Examples
Exponential random variables are often used to model the time until occurrence of a random event where there is an assumed constant risk ($\lambda$) of the event happening over time, and so are frequently used as a simplest model, for example, in reliability analysis. So examples include:

- the time to failure of a component in a system;
- the time until we find the next mistake on my slides;
- the distance we travel along a road until we find the next pothole;
- the time until the next job arrives at a database server.
Link with Poisson Distribution
Notice the duality between some of the exponential r.v. examples and those we saw for a Poisson distribution. In each case, "number of events" has been replaced with "time between events".

Claim:

If events in a random process occur according to a Poisson distribution with rate $\lambda$ then the time between events has an exponential distribution with rate parameter $\lambda$.
Proof:
Suppose we have some random event process such that $\forall x > 0$, the number of events occurring in $[0, x]$, $N_x$, follows a Poisson distribution with rate parameter $\lambda x$, so $N_x \sim \mathrm{Poi}(\lambda x)$. Such a process is known as a homogeneous Poisson process. Let $X$ be the time until the first event of this process arrives. Then
$$P(X > x) = P(N_x = 0) = \frac{(\lambda x)^0 e^{-\lambda x}}{0!} = e^{-\lambda x},$$
and hence $X \sim \mathrm{Exp}(\lambda)$. The same argument applies for all subsequent inter-arrival times.
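The identity $P(N_x = 0) = P(X > x) = e^{-\lambda x}$ can be illustrated by simulation in the other direction: build arrivals from $\mathrm{Exp}(\lambda)$ inter-arrival gaps and check the proportion of runs with no event in $[0, x]$. A sketch with the arbitrary choices $\lambda = 2$, $x = 1.5$ (only the first arrival time is needed for this check):

```python
import math
import random

random.seed(3)
lam, x = 2.0, 1.5
n_runs = 200_000

zero_count = 0
for _ in range(n_runs):
    # time of the first event; only this one Exp(lam) gap is needed to
    # decide whether N_x = 0 (no events in [0, x])
    first_arrival = -math.log(1.0 - random.random()) / lam
    if first_arrival > x:
        zero_count += 1

# P(N_x = 0) should equal P(X > x) = e^{-lam x} for X ~ Exp(lam)
print(abs(zero_count / n_runs - math.exp(-lam * x)) < 0.005)   # True
```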
3.3 Gaussian
$\mathrm{N}(\mu, \sigma^2)$
Suppose $X$ is a random variable taking values on $\mathbb{R}$ with pdf
$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\},$$
for parameters $\mu \in \mathbb{R}$ and $\sigma^2 > 0$. Then $X$ is said to follow a normal (Gaussian) distribution, $X \sim \mathrm{N}(\mu, \sigma^2)$, with cdf
$$F(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{(t-\mu)^2}{2\sigma^2}\right\} dt.$$
Example: N(0,1), N(2,1) & N(0,4) pdfs
[Figure: the N(0,1), N(2,1) and N(0,4) pdfs $f(x)$ for $-4 \leq x \leq 6$.]
N(0, 1)
Setting $\mu = 0$, $\sigma = 1$ and $Z \sim \mathrm{N}(0, 1)$ gives the special case of the standard normal, with simplified density
$$f(z) \equiv \phi(z) = \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}}.$$
Statistical Tables
Since the cdf, and therefore any probabilities, associated with a normal distribution are not analytically available, numerical integration procedures are used to find approximate probabilities.
In particular, statistical tables contain values of the standard normal cdf $\Phi(z)$ for a range of values $z \in \mathbb{R}$, and the quantiles $\Phi^{-1}(\alpha)$ for a range of values $\alpha \in (0, 1)$. Linear interpolation is used for approximation between the tabulated values.
But why just tabulate N(0, 1)? We will now see how all normal distribution probabilities
can be related back to probabilities from a standard normal distribution.
$$X \sim \mathrm{N}(\mu, \sigma^2) \implies aX + b \sim \mathrm{N}(a\mu + b, a^2\sigma^2), \quad \forall a, b \in \mathbb{R}.$$
(Note that the mean and variance parameters of this transformed distribution follow from the general results for expectation and variance of any random variable under linear transformation.)
In particular,
$$X \sim \mathrm{N}(\mu, \sigma^2) \implies \frac{X - \mu}{\sigma} \sim \mathrm{N}(0, 1).$$
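In code, $\Phi$ is available through the error function, $\Phi(z) = \frac{1}{2}\{1 + \mathrm{erf}(z/\sqrt{2})\}$, so any normal probability can be computed by standardising first. A sketch (the helper names `Phi` and `normal_cdf` are my own):

```python
import math

def Phi(z):
    """Standard normal cdf via the error function: Phi(z) = (1 + erf(z/sqrt 2))/2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_cdf(x, mu, sigma2):
    # standardise: P(X <= x) = Phi((x - mu)/sigma) for X ~ N(mu, sigma2)
    return Phi((x - mu) / math.sqrt(sigma2))

print(round(Phi(1.96), 3))                   # 0.975, matching the table below
print(round(normal_cdf(240, 200, 256), 5))   # 0.99379, i.e. Phi(2.5)
```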
Table of $\Phi$

z Φ(z) z Φ(z) z Φ(z) z Φ(z)
0 .5 0.9 .816 1.8 .964 2.8 .997
.1 .540 1.0 .841 1.9 .971 3.0 .998
.2 .579 1.1 .864 2.0 .977 3.5 .9998
.3 .618 1.2 .885 2.1 .982 1.282 .9
.4 .655 1.3 .903 2.2 .986 1.645 .95
.5 .691 1.4 .919 2.3 .989 1.96 .975
.6 .726 1.5 .933 2.4 .992 2.326 .99
.7 .758 1.6 .945 2.5 .994 2.576 .995
.8 .788 1.7 .955 2.6 .995 3.09 .999
Using the Table of $\Phi$

First of all notice that $\Phi(z)$ has been tabulated for $z > 0$.

This is because the standard normal pdf is symmetric about 0, so $\phi(-z) = \phi(z)$. For the cdf $\Phi$, this means
$$\Phi(-z) = 1 - \Phi(z).$$
Similarly, if $Z \sim \mathrm{N}(0, 1)$ and we want $P(Z > z)$, then for example $P(Z > 1.5) = 1 - P(Z \leq 1.5) = 1 - \Phi(1.5)$.
$\Phi(1.96) \approx 97.5\%$. So with 95% probability an N(0, 1) r.v. will lie in $[-1.96, 1.96]$ ($\approx [-2, 2]$). Similarly, $\Phi(2.58) \approx 99.5\%$.

More generally, writing $z_{1-\alpha/2} = \Phi^{-1}(1 - \alpha/2)$, for $X \sim \mathrm{N}(\mu, \sigma^2)$ we have
$$P_X(X \in [\mu - z_{1-\alpha/2}\sigma,\ \mu + z_{1-\alpha/2}\sigma]) = 1 - \alpha,$$
and hence $[\mu - z_{1-\alpha/2}\sigma,\ \mu + z_{1-\alpha/2}\sigma]$ gives a $(1 - \alpha)$ probability region for $X$ centred around $\mu$.

This can be rewritten as
$$P_X(|X - \mu| \leq z_{1-\alpha/2}\sigma) = 1 - \alpha.$$
Example
An analogue signal received at a detector (measured in microvolts) may be modelled as a Gaussian random variable $X \sim \mathrm{N}(200, 256)$.
1. What is the probability that the signal will exceed 240 µV?

2. What is the probability that the signal is larger than 240 µV given that it is greater than 210 µV?
Solutions:

1. $P(X > 240) = 1 - P(X \leq 240) = 1 - \Phi\left(\frac{240 - 200}{\sqrt{256}}\right) = 1 - \Phi(2.5) \approx 0.00621.$

2. $P(X > 240 \mid X > 210) = \frac{P(X > 240)}{P(X > 210)} = \frac{1 - \Phi\left(\frac{240 - 200}{\sqrt{256}}\right)}{1 - \Phi\left(\frac{210 - 200}{\sqrt{256}}\right)} \approx 0.02335.$
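Both answers can be reproduced with an erf-based $\Phi$, standardising with $\sigma = \sqrt{256} = 16$:

```python
import math

Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
mu, sigma = 200.0, math.sqrt(256.0)      # X ~ N(200, 256)

p_240 = 1.0 - Phi((240 - mu) / sigma)    # P(X > 240) = 1 - Phi(2.5)
p_210 = 1.0 - Phi((210 - mu) / sigma)    # P(X > 210) = 1 - Phi(0.625)
print(round(p_240, 5))           # 0.00621
print(round(p_240 / p_210, 5))   # 0.02335
```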
By the Central Limit Theorem, as $n \to \infty$,
$$\frac{\sum_{i=1}^n X_i - n\mu}{\sigma\sqrt{n}} \longrightarrow \mathrm{N}(0, 1),$$
or equivalently
$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \longrightarrow \mathrm{N}(0, 1),$$
where $\bar{X} = \frac{\sum_{i=1}^n X_i}{n}$.
Or finally, for large $n$ we have approximately
$$\bar{X} \sim \mathrm{N}\left(\mu, \frac{\sigma^2}{n}\right),$$
or
$$\sum_{i=1}^n X_i \sim \mathrm{N}(n\mu, n\sigma^2).$$
We note here that although all these approximate distributional results hold irrespective of the distribution of the $\{X_i\}$, in the special case where $X_i \sim \mathrm{N}(\mu, \sigma^2)$ these distributional results are, in fact, exact. This is because the sum of independent normally distributed random variables is also normally distributed.
Example
Consider the simplest example, that $X_1, X_2, \ldots$ are i.i.d. Bernoulli($p$) discrete random variables taking values 0 or 1.

Then the $\{X_i\}$ each have mean $\mu = p$ and variance $\sigma^2 = p(1-p)$.
But now, by the Central Limit Theorem (CLT), we also have for large n that approximately
$$\sum_{i=1}^n X_i \sim \mathrm{N}(n\mu, n\sigma^2) \equiv \mathrm{N}(np, np(1-p)).$$
So for large $n$, the Binomial($n$, $p$) pmf is closely approximated by the N($np$, $np(1-p)$) pdf.

[Figures: binomial pmfs $p(x)$ alongside their approximating normal pdfs $f(x)$, for two values of $n$.]
Binomial(1000, 0.5) pmf & N(500, 250) pdf

[Figure: the Binomial(1000, 0.5) pmf $p(x)$ and the N(500, 250) pdf $f(x)$, plotted for $400 \leq x \leq 600$.]
So suppose $X$ was the number of heads found on 1000 tosses of a fair coin, and we were interested in $P(X \leq 490)$.

Using the binomial distribution pmf, we would need to calculate $P(X \leq 490) = p_X(0) + p_X(1) + p_X(2) + \ldots + p_X(490)$ (!) ($\approx 0.27$).

However, using the CLT we have approximately $X \sim \mathrm{N}(500, 250)$ and so
$$P(X \leq 490) \approx \Phi\left(\frac{490 - 500}{\sqrt{250}}\right) = \Phi(-0.632) = 1 - \Phi(0.632) \approx 0.26.$$
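The two values can be checked by computing the exact binomial sum (via log-factorials, to avoid overflow in the binomial coefficients) against the CLT approximation:

```python
import math

n, p, k = 1000, 0.5, 490

def log_pmf(i):
    # log of the Binomial(n, p) pmf at i, via log-gamma to avoid overflow
    return (math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
            + i * math.log(p) + (n - i) * math.log(1 - p))

exact = sum(math.exp(log_pmf(i)) for i in range(k + 1))   # the 491-term sum

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
approx = Phi((k - n * p) / math.sqrt(n * p * (1 - p)))    # CLT: X ~ N(500, 250)

print(round(exact, 3), round(approx, 3))   # roughly 0.274 and 0.264
```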
Log-Normal Distribution
Suppose $X \sim \mathrm{N}(\mu, \sigma^2)$, and consider the transformation $Y = e^X$.

Then if $g(x) = e^x$, $g^{-1}(y) = \log(y)$ and $(g^{-1})'(y) = \frac{1}{y}$.

Then by (2) we have
$$f_Y(y) = \frac{1}{y\sqrt{2\pi\sigma^2}} \exp\left\{-\frac{\{\log(y) - \mu\}^2}{2\sigma^2}\right\}, \quad y > 0,$$
and we say $Y$ follows a log-normal distribution.
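The derived density can be cross-checked by simulating $Y = e^X$ directly; integrating it gives the implied cdf $P(Y \leq y) = \Phi\{(\log y - \mu)/\sigma\}$. A sketch with $\mu = 0$, $\sigma = 1$:

```python
import math
import random

random.seed(4)
mu, sigma = 0.0, 1.0

# simulate Y = e^X for X ~ N(0, 1) and compare the empirical P(Y <= 2)
# with the value implied by the log-normal density
n = 200_000
ys = [math.exp(random.gauss(mu, sigma)) for _ in range(n)]
empirical = sum(y <= 2.0 for y in ys) / n

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(abs(empirical - Phi(math.log(2.0))) < 0.01)   # True
```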
[Figure: the LN(0,1), LN(2,1) and LN(0,4) pdfs $f(x)$ for $0 \leq x \leq 10$.]