
Probability

Lecture 1: Random Variables

1.1 Random Variables

A real-valued function defined on the outcome of a probability experiment is called a random variable. If X is a random
variable, then the function F(x) defined by

F(x) = P{X ≤ x}

is called the distribution function of X. All probabilities concerning X can be stated in terms of F. A random
variable whose set of possible values is either finite or countably infinite is called discrete. If X is a discrete
random variable, then the function
p(x) = P{X = x}
is called the probability mass function of X. Also, the quantity E[X] defined by
E[X] = \sum_{x : p(x) > 0} x p(x)

is called the expected value of X. E[X] is also commonly called the mean or the expectation of X.
A useful identity states that for a function g,
E[g(X)] = \sum_{x : p(x) > 0} g(x) p(x)
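
For instance, taking g(x) = x^2 gives E[X^2] = \sum_{x : p(x) > 0} x^2 p(x), which is exactly the quantity that appears in the variance identity below.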

The variance of a random variable X, denoted by Var(X), is defined by

Var(X) = E[(X − E[X])^2]

The variance, which is equal to the expected square of the difference between X and its expected value, is a
measure of the spread of the possible values of X. A useful identity is

Var(X) = E[X^2] − (E[X])^2
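
This identity follows by expanding the square and using the linearity of expectation:

Var(X) = E[X^2 − 2X E[X] + (E[X])^2] = E[X^2] − 2(E[X])^2 + (E[X])^2 = E[X^2] − (E[X])^2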


The quantity \sqrt{Var(X)} is called the standard deviation of X. A useful identity is

Var(aX + b) = a^2 Var(X)
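
This can be checked directly from the definition: since E[aX + b] = aE[X] + b,

Var(aX + b) = E[(aX + b − aE[X] − b)^2] = E[a^2 (X − E[X])^2] = a^2 Var(X)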

A random variable X is continuous if there is a nonnegative function f, called the probability density
function of X, such that, for any set B,
P{X ∈ B} = \int_B f(x) dx

If X is continuous, then its distribution function F will be differentiable and


\frac{d}{dx} F(x) = f(x)
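
Combining the two displays above, interval probabilities can be written either in terms of f or in terms of F:

P{a ≤ X ≤ b} = \int_a^b f(x) dx = F(b) − F(a)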


The expected value of a continuous random variable X is defined by


E[X] = \int_{−∞}^{∞} x f(x) dx

A useful identity is that for any function g


E[g(X)] = \int_{−∞}^{∞} g(x) f(x) dx

1.2 Discrete Random Variables

1.2.1 Binomial Random Variable

A binomial random variable can be interpreted as the number of successes that occur when n independent
trials, each of which results in a success with probability p, are performed. Its probability mass function is given
by
 
p(i) = \binom{n}{i} p^i (1 − p)^{n−i},   i = 0, · · · , n
Its mean and variance are given by

E[X] = np,   Var(X) = np(1 − p)
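
For example, with n = 3 independent trials and success probability p = 1/2, the probability of exactly two successes is

p(2) = \binom{3}{2} (1/2)^2 (1/2)^1 = 3/8,

and E[X] = np = 3/2, Var(X) = np(1 − p) = 3/4.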

1.2.2 Poisson random variable

Its probability mass function is given by


p(i) = \frac{e^{−λ} λ^i}{i!},   i ≥ 0
If a large number of (approximately) independent trials are performed, each having a small probability of being
successful, then the number of successful trials that result will have a distribution that is approximately that
of a Poisson random variable. The mean and variance of a Poisson random variable are both equal to its
parameter λ. That is,
E[X] = Var(X) = λ
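
The mean can be verified directly from the mass function:

E[X] = \sum_{i ≥ 0} i \frac{e^{−λ} λ^i}{i!} = λ e^{−λ} \sum_{i ≥ 1} \frac{λ^{i−1}}{(i − 1)!} = λ e^{−λ} e^{λ} = λ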

1.2.3 Geometric random variable

The random variable X whose probability mass function is given by

p(i) = p(1 − p)^{i−1},   i = 1, 2, · · ·

is said to be a geometric random variable with parameter p. Such a random variable represents the trial
number of the first success when each trial is independently a success with probability p. Its
mean and variance are given by
E[X] = \frac{1}{p},   Var(X) = \frac{1 − p}{p^2}
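
The mean follows from the series \sum_{i ≥ 1} i q^{i−1} = 1/(1 − q)^2 for |q| < 1:

E[X] = \sum_{i ≥ 1} i p (1 − p)^{i−1} = \frac{p}{(1 − (1 − p))^2} = \frac{1}{p}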

1.2.4 Negative binomial random variable

The random variable X whose probability mass function is given by


 
p(i) = \binom{i − 1}{r − 1} p^r (1 − p)^{i−r},   i ≥ r
is said to be a negative binomial random variable with parameters r and p. Such a random
variable represents the trial number of the rth success when each trial is independently a success
with probability p. Its mean and variance are given by
E[X] = \frac{r}{p},   Var(X) = \frac{r(1 − p)}{p^2}
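
One way to see these formulas: X can be written as the sum of r independent geometric random variables with parameter p (the waiting times between consecutive successes), so its mean and variance are r times those of a single geometric random variable, giving r/p and r(1 − p)/p^2.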

1.2.5 Hypergeometric random variable

A hypergeometric random variable X with parameters n, N, and m represents the number of white balls
selected when n balls are randomly chosen from an urn that contains N balls of which m are
white. The probability mass function of this random variable is given by
p(i) = \frac{\binom{m}{i} \binom{N − m}{n − i}}{\binom{N}{n}},   i = 0, · · · , m
With p = m/N, its mean and variance are
E[X] = np,   Var(X) = np(1 − p) \frac{N − n}{N − 1}
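
As a concrete check, take N = 10 balls of which m = 4 are white, and draw n = 3. Then p = 2/5 and

p(2) = \frac{\binom{4}{2} \binom{6}{1}}{\binom{10}{3}} = \frac{6 · 6}{120} = \frac{3}{10},   E[X] = 3 · \frac{2}{5} = \frac{6}{5},   Var(X) = 3 · \frac{2}{5} · \frac{3}{5} · \frac{7}{9} = \frac{14}{25}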

1.3 Continuous Random Variables

1.3.1 Uniform Random Variable

A random variable X is said to be uniform over the interval (a, b) if its probability density function is given
by
 1

a≤x≤b
f (x) = b − a
0 otherwise

Its expected value and variance are


E[X] = \frac{a + b}{2},   Var(X) = \frac{(b − a)^2}{12}
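
The mean can be computed directly from the density:

E[X] = \int_a^b \frac{x}{b − a} dx = \frac{b^2 − a^2}{2(b − a)} = \frac{a + b}{2}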

1.3.2 Normal Random Variable

A random variable X is said to be normal with parameters µ and σ 2 if its probability density function is
given by
f(x) = \frac{1}{\sqrt{2π} σ} e^{−(x−µ)^2 / 2σ^2},   −∞ < x < ∞
It can be shown that
µ = E[X],   σ^2 = Var(X)

1.3.3 Exponential Random Variable

A random variable whose probability density function is of the form


f(x) = \begin{cases} λ e^{−λx} & x ≥ 0 \\ 0 & \text{otherwise} \end{cases}

is said to be an exponential random variable with parameter λ. Its expected value and variance are, respec-
tively,
E[X] = \frac{1}{λ},   Var(X) = \frac{1}{λ^2}
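
The mean follows by integration by parts:

E[X] = \int_0^{∞} x λ e^{−λx} dx = \left[ −x e^{−λx} \right]_0^{∞} + \int_0^{∞} e^{−λx} dx = 0 + \frac{1}{λ} = \frac{1}{λ}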
