EE 178

Lecture notes 8 Continuous Random Variables
Lecture Outline • Continuous Random Variables • Probability Density Functions (PDF’s) • Examples: Uniform, Exponential, Laplacian, Gaussian • Expectation • Cumulative Distribution Functions (CDF’s) Reading: Bertsekas & Tsitsiklis 3.1-3.3
Continuous Random Variables 8–1

Continuous Random Variables

• Suppose now a RV X is defined on an experiment in such a way that X can take on a continuum of values, e.g., spin the fair wheel and read out the number pointed to; measure the voltage across a heated resistor; measure the phase of a random sinusoid; measure the light intensity at a specific point in a (traditional analog) photograph, . . .

• How do we describe probabilities of interesting events?

• Elementary events like {X = x} = {ω : X(ω) = x} are not interesting, as they have probability 0 in nontrivial examples. Analogous to summing a probability mass function over the points in a set to find the total probability of the set, we integrate a density of probability over a set to find the total probability of the set containing those points.
PDF’s

Idea: Analogous to mass density in physics (integrate mass density to get total mass), assume a probability density function (pdf) fX(x) for which

P(F) = ∫_F fX(x) dx

for any event F. Thus, for example, for any interval (a, b] = {x : a < x ≤ b} the probability can be computed as

P((a, b]) = ∫_a^b fX(x) dx

Observations:

• Since Ω is an event with probability 1, we must have ∫_Ω fX(x) dx = 1, i.e., a pdf must be normalized.

• Since ∫_F fX(x) dx must be a probability for any event F, it must be nonnegative. This can only be true if fX(x) ≥ 0 for all x.

• The previous two properties are satisfied by pmf’s, but unlike a pmf, a pdf is not the probability of anything. Hence, in particular, it need not be less than 1 for all x!

• Can relate a pdf to a probability using the mean value theorem of calculus: Fix x and choose a very small δ > 0. Then, provided fX is a reasonably well-behaved function,

P((x, x + δ]) = ∫_x^{x+δ} fX(α) dα ≈ fX(x) δ
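One way to see this approximation concretely is a short numerical comparison of the exact interval probability with fX(x)δ. The sketch below uses an exponential pdf as the test case; the rate λ = 2 and the evaluation point x = 0.5 are arbitrary illustrative choices:

```python
import math

# Exponential pdf with rate lam (an illustrative choice; any well-behaved pdf works)
lam = 2.0
f = lambda x: lam * math.exp(-lam * x)

# Exact interval probability P((x, x+delta]) from the antiderivative of f
def p_interval(x, delta):
    return math.exp(-lam * x) - math.exp(-lam * (x + delta))

x, delta = 0.5, 1e-4
exact = p_interval(x, delta)
approx = f(x) * delta          # the mean-value approximation fX(x)*delta

# The two agree to within O(delta^2)
print(exact, approx)
```

For δ = 10⁻⁴ the two values agree to many significant digits, and the gap shrinks quadratically as δ shrinks.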

Thus a pdf evaluated at a point and multiplied by a differential length is approximately the probability of X being within an interval of that length containing the point. (This reinforces the necessity of having fX be nonnegative.)

Examples

• Continuous uniform pdf:

fX(x) = c if x ∈ [a, b), 0 otherwise

Normalization requires

1 = ∫_{−∞}^{∞} fX(x) dx = ∫_a^b c dx = c ∫_a^b dx = c(b − a)

so c = 1/(b − a).
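A minimal numerical check of this normalization, with arbitrary illustrative endpoints a = 2 and b = 5, is:

```python
# Numerical check of the normalization c = 1/(b - a) for a uniform pdf on [a, b)
# (a = 2, b = 5 are arbitrary illustrative values)
a, b = 2.0, 5.0
c = 1.0 / (b - a)

def f(x):
    return c if a <= x < b else 0.0

# Midpoint-rule approximation of the integral of f over [a-1, b+1]
n = 100_000
lo, hi = a - 1.0, b + 1.0
dx = (hi - lo) / n
total = sum(f(lo + (i + 0.5) * dx) for i in range(n)) * dx
print(total)  # ≈ 1
```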


[Figure: uniform pdf with constant height 1/(b − a) on [a, b).]

• Piecewise constant pdf:

fX(x) = ci if x ∈ [ai, bi), i = 1, 2, . . . , N; 0 otherwise

Can be used to approximate many pdf’s.

Normalization requires that

Σ_{i=1}^N ci(bi − ai) = 1
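A sketch of such a pdf in code, with three illustrative intervals whose levels are chosen so the normalization condition holds:

```python
# A piecewise constant pdf given by levels c_i on intervals [a_i, b_i)
# (three illustrative intervals; the levels are chosen so the pdf normalizes)
intervals = [(0.0, 1.0), (1.0, 3.0), (3.0, 4.0)]   # the [a_i, b_i)
levels    = [0.4, 0.2, 0.2]                        # the c_i

# Normalization: the sum of c_i * (b_i - a_i) must equal 1
total = sum(c * (b - a) for (a, b), c in zip(intervals, levels))
assert abs(total - 1.0) < 1e-12

def f(x):
    """Evaluate the piecewise constant pdf at x."""
    for (a, b), c in zip(intervals, levels):
        if a <= x < b:
            return c
    return 0.0

print(f(0.5), f(2.0), f(5.0))
```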


• Exponential pdf (continuous analog to the geometric pmf): Fix a parameter λ > 0.

fX(x) = λe−λx; x ≥ 0

[Figure: exponential pdf λe−λx decaying from λ at x = 0; piecewise constant pdf with levels c1, c2, c3 on [a1, b1), [a2, b2), [a3, b3), where b1 = a2 and b2 = a3.]

A model for how long you wait for a bus, a train, a radioactive particle to arrive at a Geiger counter, or a packet to arrive at a router.
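A quick simulation sketch of such waiting times, using Python's standard-library exponential sampler; the rate λ = 0.5 and sample size are illustrative choices:

```python
import random

# Simulate exponential waiting times with rate lam and compare the
# empirical mean to the theoretical value 1/lam (lam = 0.5 is illustrative)
random.seed(0)
lam = 0.5
n = 100_000
waits = [random.expovariate(lam) for _ in range(n)]
mean_wait = sum(waits) / n
print(mean_wait)   # should be close to 1/lam = 2
```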


• Laplacian pdf (two-sided exponential): Fix a parameter λ > 0.

fX(x) = (λ/2) e−λ|x|

A good model for the behavior of quantities that arise in many signal processing applications. In particular, it models prediction errors in predictive coding systems and transform coefficients (Fourier, wavelet). A common model in speech and image processing.

• Gaussian (normal) pdf: Fix parameters m and σ > 0.

fX(x) = (1/(√(2π) σ)) e^{−(x−m)²/(2σ²)}

The Gaussian is perhaps the most important single pdf. We will see that it is a good model for the density of quantities formed by summing a large number of small random effects (central limit theorem). Often abbreviate “Gaussian pdf with parameters m and σ²” to N(m, σ²). The special case m = 0 and σ² = 1 is called the “standard normal” pdf.

Expectation

• Expected value:

E[X] = ∫_{−∞}^{∞} x fX(x) dx

• Expectation theorem:

E[g(X)] = ∫_{−∞}^{∞} g(x) fX(x) dx

• Variance:

σX² = ∫_{−∞}^{∞} (x − E[X])² fX(x) dx = E[X²] − E[X]²

• Mean and variance of an affine function (linear + constant): for Y = aX + b,

E[Y] = aE[X] + b
σY² = a²σX²
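The affine rules above can be checked by simulation. The sketch below draws X uniform on [0, 1] (so E[X] = 1/2, σX² = 1/12) and forms Y = aX + b; the constants a = 3, b = −1 are illustrative:

```python
import random

# Monte Carlo check of E[Y] = a*E[X] + b and var(Y) = a^2 * var(X)
# for Y = a*X + b with X uniform on [0, 1]  (a = 3, b = -1 are illustrative)
random.seed(1)
a, b = 3.0, -1.0
xs = [random.random() for _ in range(200_000)]
ys = [a * x + b for x in xs]

def mean(v): return sum(v) / len(v)
def var(v):
    m = mean(v)
    return sum((t - m) ** 2 for t in v) / len(v)

# Theory: E[X] = 1/2, var(X) = 1/12
print(mean(ys), a * mean(xs) + b)       # both ≈ 0.5
print(var(ys), a * a * var(xs))         # both ≈ 0.75
```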


Examples

All the following moments can be computed by brute force using calculus. We will later see easier methods, so the computations are left as an exercise.

• Uniform RV: E[X] = (a + b)/2; σX² = (b − a)²/12

• Exponential RV: E[X] = 1/λ; σX² = 1/λ²

• Laplacian RV: E[X] = 0; σX² = 2/λ²

• Gaussian RV: E[X] = m; σX² = σ²

Cumulative Distribution Functions (CDF’s)

For discrete problems we use pmf’s; for continuous problems we use pdf’s. Many real-world problems are mixed, having both discrete and continuous components. E.g., a packet arrives at a router: if the input buffer is empty (probability p), the packet is serviced immediately; otherwise the packet must wait for a random amount of time as characterized by a pdf.

There is a third probability function (a function of points used to compute probabilities) that characterizes all of these cases: discrete, continuous, and mixed. The cumulative distribution function or cdf FX(x) of a random variable is defined by

FX(x) = P(X ≤ x)
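The defining probability P(X ≤ x) can be estimated directly from samples as the fraction of draws that fall at or below x. A minimal sketch, using X uniform on [0, 1) so that FX(x) = x on that interval (the sample size and evaluation point are illustrative):

```python
import random

# The cdf F_X(x) = P(X <= x) estimated as the fraction of samples <= x.
# Here X is uniform on [0, 1), so F_X(x) = x on that interval.
random.seed(3)
n = 100_000
samples = [random.random() for _ in range(n)]

def empirical_cdf(x):
    return sum(1 for s in samples if s <= x) / n

print(empirical_cdf(0.3))   # ≈ 0.3
```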


Observations:

• Like the pmf and unlike the pdf, the cdf is the probability of something. Hence it is nonnegative and no greater than 1. Normalization says that FX(∞) = 1.

• The probability of an interval can be easily computed from a cdf:

P((a, b]) = P(a < X ≤ b)
          = P(X ≤ b) − P(X ≤ a)   (additivity)
          = FX(b) − FX(a)

• The previous property proves that FX(x) is a nondecreasing function of its argument, i.e., either it grows or it stays the same.

• Suppose that the RV X is continuous and is described by a pdf fX(x). Then

P(a < X ≤ b) = FX(b) − FX(a) = ∫_a^b fX(x) dx

From the fundamental theorem of calculus,

fX(x) = dFX(x)/dx   and   FX(x) = ∫_{−∞}^x fX(α) dα

• Suppose that the RV X is purely discrete and is described by a pmf. Suppose further that X takes on only integer values. In this case

pX(x) = P(X ≤ x) − P(X ≤ x − 1) = FX(x) − FX(x − 1)


Examples

• CDF of a discrete RV:

[Figure: staircase cdf of a discrete RV taking values 1, 2, 3, 4, with a jump of height pX(2) at x = 2.]

• CDF of a continuous RV: Uniform pdf:

FX(x) = ∫_a^x 1/(b − a) dα = (x − a)/(b − a) for a ≤ x ≤ b; 0 for x < a; 1 for x > b

[Figure: uniform pdf of height 1/(b − a) on [a, b], and the corresponding cdf rising linearly from 0 at a to 1 at b.]

• Exponential RV:

FX(x) = 1 − e−λx; x ≥ 0

• Mixed example: buffer empty with probability p; if not empty, wait a random time described by an exponential density:

FX(x) = 0 for x < 0; p for x = 0; p + (1 − p)(1 − e−λx) for x > 0

There is no nice closed form for the cdf of a Gaussian RV, but there are many published tables for the cdf of a standard normal pdf, a quantity often called the Φ function:

Φ(r) = (1/√(2π)) ∫_{−∞}^r e^{−t²/2} dt

Φ and related functions are widely tabulated (and built into computers). Related functions are the error function (erf) and the Q-function; these are related to Φ by variable transformations. The cdf of an arbitrary Gaussian RV can then be found by further variable transformations.
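These variable transformations can be sketched directly: Φ in terms of the error function, Q as its complement, and the cdf of a general N(m, σ²) RV via the standardization (x − m)/σ:

```python
import math

# The standard normal cdf Phi expressed through the error function:
# Phi(r) = (1 + erf(r / sqrt(2))) / 2, and the Q-function Q(r) = 1 - Phi(r)
def Phi(r):
    return 0.5 * (1.0 + math.erf(r / math.sqrt(2.0)))

def Q(r):
    return 1.0 - Phi(r)

# cdf of a general N(m, sigma^2) RV by the variable transformation (x - m)/sigma
def gaussian_cdf(x, m, sigma):
    return Phi((x - m) / sigma)

print(Phi(0.0))                      # 0.5 by symmetry
print(gaussian_cdf(3.0, 3.0, 2.0))   # also 0.5: x = m is the median
```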


Why Are CDF’s Interesting?

• Characterize probability for discrete, continuous, and mixed random variables.

• Often it is easier to find pdf’s by first finding the cdf and then differentiating. Occasionally this is also simpler for discrete cases.

As an example of the second item, suppose you are allowed to take a test m times and your final score X will be the maximum:

X = max{X1, X2, . . . , Xm}

Assuming the Xi are mutually independent, what is the pmf or pdf of X? It is easiest to find the cdf first:

FX(x) = P(X ≤ x)
      = P(X1 ≤ x, X2 ≤ x, . . . , Xm ≤ x)
      = P(X1 ≤ x) P(X2 ≤ x) · · · P(Xm ≤ x)
      = FX1(x) FX2(x) · · · FXm(x)
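This product formula is easy to check by simulation. For m independent Uniform(0, 1) scores, each FXi(x) = x on [0, 1], so the maximum has cdf FX(x) = x^m; the sketch below compares that with an empirical estimate (m = 3 and the evaluation point x = 0.5 are illustrative):

```python
import random

# Empirical check that the max of m independent Uniform(0,1) RVs has
# cdf F_X(x) = x^m (each F_Xi(x) = x on [0,1], and the cdfs multiply)
random.seed(2)
m, n = 3, 200_000
maxima = [max(random.random() for _ in range(m)) for _ in range(n)]

x = 0.5
empirical = sum(1 for v in maxima if v <= x) / n
theoretical = x ** m    # = 0.125 for m = 3
print(empirical, theoretical)
```

Differentiating FX(x) = x^m then gives the pdf fX(x) = m x^{m−1} on [0, 1], illustrating the "find the cdf, then differentiate" strategy.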
