

This chapter is all about:

1. Another method to study our events of interest
2. Modeling probability by creating catalogs of common distributions and their
distribution functions

Learning Objectives:
(2) MAIN Objectives
1. To grasp the concept of a random variable
2. To grasp the concept of different distribution functions

*along the way we will also learn

1. To compute probabilities using these distribution functions
2. To compute expectations using these distribution functions
3. To generate moments

3.1.1 Random Variables
3.1.2 Cumulative Distribution Function


3.2.1 Discrete Random Variables
3.2.2 Probability Mass Function (PMF)

3.3.1 Continuous Random Variables

3.3.2 Probability Density Function (PDF)


3.5.1 Expected values of X
3.5.2 Expected values of g(X)
3.5.3 Variance of X
3.5.4 Properties
I. Properties of Expectations
II. Properties of Variance

3.1.1 Random Variables

I. Random Variable
→ a function whose value is a real number, determined by each
sample point in the sample space of a random experiment
→ notation: uppercase letters, *mostly X
**lowercase letters, mostly 'x', for one of its values

II. Random Variable as a Function [constraints]

Random Variable as a function [ Ω → R ]

→ since a function,
each outcome in the sample space must be mapped to exactly one
real number

→ this assures us that the random variable X will have one and only one
realized value, whatever the outcome of the random experiment

III. Random Variable as an Event

Random Variable as an Event

→ meaning, a random variable gives a new way of expressing events

Examples: *to better understand

{ X ≤ x } : the event containing all sample points for which the value of the
random variable X is less than or equal to x

{ X > x } : the event containing all sample points for which the value of the
random variable X is greater than x

{ a ≤ X ≤ b } : the event containing all sample points for which the value of the
random variable X is between a and b

1. Translating events to random-variable format
1.) Determine what X represents
2.) Relate X to one of its values based on the given event
Let X be the number of tails

A = the event of observing at least one tail

A = { (T,T), (T,H), (H, T) }
Given that then, X > 0 or X ≥ 1
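The coin-toss translation above can be sketched in Python (a minimal illustration; the function and variable names are my own):

```python
from itertools import product

# Sample space of two coin tosses
omega = list(product("HT", repeat=2))

# X: number of tails in an outcome -- a function from the sample space to R
def X(outcome):
    return outcome.count("T")

# Event A = "observing at least one tail", in random-variable form: {X >= 1}
A = [w for w in omega if X(w) >= 1]
print(A)  # [('H', 'T'), ('T', 'H'), ('T', 'T')]
```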

IV. Indicator Function

Let Ω be the universal set : sample space
A be the subset of Ω : the event that is a subset of sample space
: one of the partitions of Ω
Indicator Function
→ the function IA(x) that equals 1 if x ∈ A and 0 otherwise; it
indicates the partitions of the sample space Ω, that is, it indicates the A's

1. Translating a piecewise function to an indicator function
       { x     , 0 < x < 1
f(x) = { 2 − x , 1 ≤ x ≤ 2
       { 0     , elsewhere
indicator function: f(x) = x·I(0,1)(x) + (2 − x)·I[1,2](x)
2. Evaluation of indicator function
Given: Indicator Function
known value of x
Find: f(known value of x)
1. Plug-in the known value of x to the indicator function
2. delete terms where x is not an element of the interval
3. evaluate
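The evaluation steps above can be sketched in Python; the helper `I` below is a hypothetical implementation of an interval indicator:

```python
def I(x, a, b, left_open=True, right_open=True):
    """Indicator of an interval: 1 if x lies inside, else 0."""
    lo_ok = x > a if left_open else x >= a
    hi_ok = x < b if right_open else x <= b
    return 1 if (lo_ok and hi_ok) else 0

# f(x) = x*I(0,1)(x) + (2 - x)*I[1,2](x), as in the example above
def f(x):
    return x * I(x, 0, 1) + (2 - x) * I(x, 1, 2, left_open=False, right_open=False)

print(f(0.5))  # 0.5  (only the first term survives)
print(f(1.5))  # 0.5  (only the second term survives)
print(f(3))    # 0    (both indicator terms vanish)
```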


3.1.2 Cumulative Distribution Function (CDF)

I. Definition
→ notation: F(x)
→ a function defined for any real number x as:
F(x) = P( X ≤ x )
→ usage: used to compute the probability of an event that is
expressed in terms of the random variable
II. Properties
1. 0 ≤ F(x) ≤ 1 since it's equated to a probability
2. It is a nondecreasing function
3. Every random variable has one and only one CDF
*equivalently, every value x is assigned one and only one value F(x)

III. Templates
(4) Four Main Templates
1. P( a < X ≤ b ) = FX(b) − FX(a)
2. P( X ≤ a ) = FX(a)
3. P( X > a ) = 1 − FX(a)
4. P( X = a ) = P( a- < X ≤ a ) = FX(a) − FX(a-)   [ a- denotes the limit from the left ]
*changing an inequality between strict and non-strict replaces a with a-
ex: P( X < a ) = FX(a-)

1. Computing the probability through the CDF
Given: the CDF, the probabilities in question
Find: the probabilities
1. Translate the probability in question into the left-hand side of a template
example: P(X < 1) = FX(1-) [ like the reverse of the given form ]
*reduce further to a template if needed
2. Evaluate using the piece of the CDF that satisfies it
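As a worked sketch, here is the template approach applied to a hypothetical step CDF (the CDF of the number of tails in two fair coin tosses); the small `EPS` shift is just a numerical stand-in for the left limit F(a-):

```python
# CDF of X = number of tails in two fair coin tosses (worked example)
def F(x):
    if x < 0:
        return 0.0
    if x < 1:
        return 0.25
    if x < 2:
        return 0.75
    return 1.0

EPS = 1e-9  # numerical stand-in for the left limit F(a-)

p_le_1 = F(1)               # template 2: P(X <= 1) = F(1)
p_gt_0 = 1 - F(0)           # template 3: P(X > 0) = 1 - F(0)
p_eq_1 = F(1) - F(1 - EPS)  # template 4: P(X = 1) = F(1) - F(1-)
print(p_le_1, p_gt_0, p_eq_1)  # 0.75 0.75 0.5
```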

3.2.1 Discrete Random Variables

Discrete Sample Space

→ a sample space that is finite or countably infinite

Discrete Random Variable

→ a random variable whose set of possible values is finite or countably infinite

3.2.2 Probability Mass Function (PMF) of a discrete variable

I. Definitions
→ notation: p(x)
→ a function defined for any real number x as
p(x) = P(X=x)

→ usage: use PMF to compute for the probabilities expressed in terms of X

: use PMF to compute summary measures like the mean and the
standard deviation

mass points
→ the values of the discrete random variable X for which p(x) > 0

1. Getting Probability through the PMF
Given: what the random variable X represents (the event it describes)
: the sample space
Find: probability through the PMF
* Identify the mass points of X / the possible values of X
*these form the range of the function X
* Build a table
1. columns: the possible values of X under the heading 'x'
the events associated with X = x
p(x) = P(X = x) *read as: the probability that X takes the value x,
as seen from the associated events

Check the properties:
1. p(x) > 0 : where x is a mass point
2. ∑ p(x) = 1

2. Read the needed probability from the table
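The table-building procedure can be sketched in Python for the two-coin example from earlier (a minimal illustration using exact fractions):

```python
from itertools import product
from fractions import Fraction

# Sample space of two fair coin tosses; X = number of tails
omega = list(product("HT", repeat=2))
X = lambda w: w.count("T")

# Mass points = possible values of X; p(x) = P(X = x) by counting outcomes
mass_points = sorted({X(w) for w in omega})
p = {x: Fraction(sum(1 for w in omega if X(w) == x), len(omega))
     for x in mass_points}
# p[0] = 1/4, p[1] = 1/2, p[2] = 1/4

assert sum(p.values()) == 1  # property 2: the masses sum to 1
print(p[1])                  # P(X = 1), prints 1/2
```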
3.3.1 Continuous Random Variable

Continuous Sample Space

→ a sample space that isn't countable, i.e. whose sample points cannot
be put in one-to-one correspondence with the counting numbers; mostly intervals

Continuous Random Variable

→ a random variable whose set of possible values is uncountable, typically an
interval of that sample space

3.3.2 Probability Density Function (PDF) of a continuous random variable X

I. Definition
Probability Density Function (PDF) of a continuous random variable X
→ notation: f(x)
→ a function defined for any real number x and
satisfy the following properties:
1. f(x) ≥ 0 for all x
2. the area below the whole curve f(x) and above the x-axis is always
equal to 1
3. P( a ≤ X ≤ b ) is the area bounded by the curve f(x), the x-axis, and the
lines x = a and x = b
→ usage: to compute probabilities
: to compute summary measures

Given: a PDF *it's given, not constructed this time
: the interval of interest
Find: probability
1. f(x) ≥ 0 : check each term of the function for where it could be negative
2. ∫ from −∞ to ∞ f(x) dx = 1

1. Write the probability of interest, P( a ≤ X ≤ b )
2. Rewrite it as the integral of f(x) over the interval of interest
3. Evaluate the integral to arrive at the needed probability

→ when evaluating the probabilities of events expressed in terms of a
continuous random variable, it *does not matter whether we are dealing with
the event X < a or X ≤ a, since P(X = a) = 0;
therefore P(X < a) = P(X ≤ a), i.e. the inequality with or without the equal sign is equivalent
**discrete random variables do not share this property
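A short numerical sketch of the PDF checks and the probability computation, using a hypothetical density f(x) = 2x on (0, 1) and a midpoint Riemann sum in place of symbolic integration:

```python
# Hypothetical PDF: f(x) = 2x on (0, 1), 0 elsewhere
def f(x):
    return 2 * x if 0 < x < 1 else 0.0

n = 100_000  # number of midpoint-rule subintervals

# Property 2: the total area under f must equal 1
total = sum(f((i + 0.5) / n) for i in range(n)) / n
assert abs(total - 1) < 1e-6

# P(0.2 <= X <= 0.5): the area under f between the lines x = 0.2 and x = 0.5
a, b = 0.2, 0.5
prob = sum(f(a + (i + 0.5) * (b - a) / n) for i in range(n)) * (b - a) / n
print(round(prob, 4))  # analytically b**2 - a**2 = 0.21
```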

3.5.1 Expected Value of X [specific]

I. Definition
Let X be a random variable
Expected Value of X [ or mean of X ]
→ notation: E(X) or μX
defined by:
       { ∑ for all m.p. x·p(x)             , if X is discrete
E(X) = { ∫ from −∞ to ∞ x·fX(x) dx         , if X is continuous

→ E(X) is actually a weighted mean of the values that the random variable
takes on, where each value is weighted by the probability that the random
variable is equal to that value

Given: a PMF (the table) or PDF (the indicator function)
Find: E(X)
1. Determine if X is discrete or continuous
2. If discrete, use E(X) = ∑ for all m.p. x·p(x)
If continuous, use E(X) = ∫ from −∞ to ∞ x·fX(x) dx
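Both branches of the definition can be sketched numerically; the discrete PMF is the two-coin example, and the continuous density f(x) = 2x on (0, 1) is a hypothetical choice (its exact mean is 2/3):

```python
from fractions import Fraction

# Discrete case: X = number of tails in two fair coin tosses
p = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
E_X = sum(x * px for x, px in p.items())
print(E_X)  # 1

# Continuous case: E(X) = integral of x * f(x) over (0, 1) for f(x) = 2x,
# approximated with a midpoint Riemann sum
n = 100_000
E_cont = sum(((i + 0.5) / n) * 2 * ((i + 0.5) / n) for i in range(n)) / n
print(round(E_cont, 4))  # approximately 2/3
```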

3.5.2 Expected Values of g(X)

I. Definition
Let X be a random variable

Expected Value of g(X)

→ notation: E(g(X))
          { ∑ for all m.p. g(x)·p(x)          , if X is discrete
E(g(X)) = { ∫ from −∞ to ∞ g(x)·fX(x) dx      , if X is continuous

Given: a word problem
with events
Find: what is needed
*Construct a table
1. One column for each event
2. Solve for the value of X in each event

1. then use ∑ for all m.p. g(x)·p(x) *the form of g depends on the wording of the target
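A minimal sketch of E(g(X)) for the two-coin PMF, with the hypothetical choice g(x) = x²:

```python
from fractions import Fraction

# PMF of X = number of tails in two fair coin tosses
p = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

# g is chosen by the problem; here g(x) = x**2 as an illustration
g = lambda x: x ** 2

# E(g(X)) = sum over all mass points of g(x) * p(x)
E_gX = sum(g(x) * px for x, px in p.items())
print(E_gX)  # 0*1/4 + 1*1/2 + 4*1/4 = 3/2
```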
3.5.3 Variance of X

I. Definition
Let X be a random variable with mean, μ
Variance of X
→ notation: σ² or Var(X)

σ² = Var(X) = E[ (X − μ)² ]

*the standard deviation is the positive square root of the variance

→ still a measure of dispersion: the average squared difference between
the value of X and μ
→ like the mean, it is also defined in terms of expectations

3.5.4 Properties

I. Properties of Expectation

1. E(X − μ) = 0

2. E(aX + b) = aE(X) + b
if b=0, E(aX) = aE(X)
if a=0, E(b) = b

3. E(X+Y) = E(X) + E(Y)

4. E(X − Y) = E(X) − E(Y)

5. E(XY)= E(X)E(Y) *only if X and Y are independent variables

II. Properties of Variance

1. Var(aX + b) = a²Var(X)
if b = 0, then Var(aX) = a²Var(X)
if a = 0, then Var(b) = 0

2. Var(X + Y) = Var(X) + Var(Y) *only if X and Y are independent variables

3. Var(X − Y) = Var(X) + Var(Y) *only if X and Y are independent variables

4. Var(X) = E(X²) − [E(X)]²

1. Evaluating Expectations or Variances
Given: an expectation or variance expression
Find: its equivalent real number
1. Reduce it to the forms given above
2. Evaluate using the given values
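The shortcut formula and the scaling property can be checked on the two-coin PMF (a sketch; the values a = 3, b = 7 are arbitrary):

```python
from fractions import Fraction

# PMF of X = number of tails in two fair coin tosses
p = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
E = lambda g: sum(g(x) * px for x, px in p.items())

mu = E(lambda x: x)                       # E(X) = 1
var = E(lambda x: (x - mu) ** 2)          # definition: E[(X - mu)^2]
shortcut = E(lambda x: x ** 2) - mu ** 2  # property 4: E(X^2) - [E(X)]^2
assert var == shortcut                    # both give 1/2

# Property 1 of variance: Var(aX + b) = a^2 * Var(X)
a, b = 3, 7
mu2 = E(lambda x: a * x + b)
var2 = E(lambda x: (a * x + b - mu2) ** 2)
assert var2 == a ** 2 * var
print(var, var2)  # 1/2 9/2
```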