
Probability

Chapter 8

JOINTLY DISTRIBUTED RANDOM VARIABLES


1. Joint Distribution Functions
2. Conditional Distributions
3. Conditional Expectation and Variance
4. Independent Random Variables
5. Functions of Two Random Variables
6. Joint Probability Distribution of Functions of Random Variables
7. Central Limit Theorem

1st Semester 2023



1. Joint Distribution Functions


• For any two random variables 𝑋 and 𝑌 , the joint cumulative
probability distribution function of 𝑋 and 𝑌 is defined by

F(a, b) = P(X ≤ a, Y ≤ b),   −∞ < a, b < ∞

• The distribution of 𝑋 or 𝑌 can be obtained from the joint distribution


of 𝑋 and 𝑌 as follows:
F_X(a) = F(a, ∞)
F_Y(b) = F(∞, b)
• The distribution functions 𝐹𝑋 and 𝐹𝑌 are sometimes referred to as the
marginal distributions of 𝑋 and 𝑌.



Discrete Case
• In the case when 𝑋 and 𝑌 are both discrete random variables, it is
convenient to define the joint probability mass function of 𝑋 and 𝑌 by
p(x, y) = P(X = x, Y = y)
• The probability mass function of X can be obtained from p(x, y) by
p_X(x) = P(X = x) = Σ_{y: p(x,y) > 0} p(x, y)
• Similarly,
p_Y(y) = P(Y = y) = Σ_{x: p(x,y) > 0} p(x, y)
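Illustration (not in the original slides): a minimal Python sketch of this marginalization, assuming the joint pmf is stored as a dictionary keyed by (x, y); the probability values are hypothetical.

from collections import defaultdict

# Hypothetical joint pmf p(x, y); any nonnegative values summing to 1 would do.
joint_pmf = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}

def marginals(p):
    # Sum the joint pmf over the other variable to obtain each marginal pmf.
    p_x, p_y = defaultdict(float), defaultdict(float)
    for (x, y), prob in p.items():
        p_x[x] += prob
        p_y[y] += prob
    return dict(p_x), dict(p_y)

p_X, p_Y = marginals(joint_pmf)
print(p_X, p_Y)   # {0: 0.6, 1: 0.4} and {0: 0.5, 1: 0.5} (up to float rounding)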



Example 8.1: Suppose that 3 balls are randomly selected from an urn
containing 3 red, 4 white, and 5 blue balls. Let X and Y denote,
respectively, the numbers of red and white balls chosen. Find the joint
probability mass function of X and Y, the probability mass function of
X, and the probability mass function of Y.
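Illustration (not in the original slides): a short counting sketch with math.comb; each probability is a ratio of combinations, C(3, i)·C(4, j)·C(5, 3−i−j) / C(12, 3).

from math import comb

total = comb(12, 3)                       # ways to choose 3 balls from 12
joint = {}                                # joint[(i, j)] = P(X = i, Y = j)
for i in range(4):                        # i red balls chosen
    for j in range(4 - i):                # j white balls chosen, i + j <= 3
        joint[(i, j)] = comb(3, i) * comb(4, j) * comb(5, 3 - i - j) / total

p_X = {i: sum(p for (a, b), p in joint.items() if a == i) for i in range(4)}
p_Y = {j: sum(p for (a, b), p in joint.items() if b == j) for j in range(4)}
print(joint[(0, 0)])                      # C(5,3)/C(12,3) = 10/220 ≈ 0.045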



Continuous Case
• We say that 𝑋 and 𝑌 are jointly continuous if there exists a function
𝑓(𝑥, 𝑦), defined for all real 𝑥 and 𝑦, having the property that for every
set 𝐶 of pairs of real numbers

P((X, Y) ∈ C) = ∫∫_{(x,y)∈C} f(x, y) dx dy
• The function 𝑓(𝑥, 𝑦) is called the joint probability density function of
𝑋 and 𝑌.
• If A and B are any sets of real numbers, then
P(X ∈ A, Y ∈ B) = ∫_B ∫_A f(x, y) dx dy


• In particular, we have
F(a, b) = ∫_{−∞}^{b} ∫_{−∞}^{a} f(x, y) dx dy

• Hence, when we differentiate, we obtain

f(a, b) = ∂²F(a, b) / ∂a∂b



• If X and Y are jointly continuous, then they are individually continuous,
and their probability density functions are given by
f_X(x) = ∫_{−∞}^{+∞} f(x, y) dy
f_Y(y) = ∫_{−∞}^{+∞} f(x, y) dx
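Illustration (not in the original slides): a symbolic sketch of these marginal integrals with sympy, using an assumed joint density f(x, y) = x + y on the unit square.

import sympy as sp

x, y = sp.symbols('x y')
f = x + y                           # assumed joint density, valid on 0 < x, y < 1

f_X = sp.integrate(f, (y, 0, 1))    # marginal density of X: x + 1/2
f_Y = sp.integrate(f, (x, 0, 1))    # marginal density of Y: y + 1/2
print(f_X, f_Y)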



Example 8.2: Consider the joint probability density function
f(x, y) = kx   for 0 < x < 1, 0 < y < 1
f(x, y) = 0    otherwise
where k is a constant. Find k.
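A sketch of the normalization argument in sympy: the density must integrate to 1 over the unit square, which determines k.

import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)
total = sp.integrate(k * x, (x, 0, 1), (y, 0, 1))   # = k/2
print(sp.solve(sp.Eq(total, 1), k))                 # [2], so k = 2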



Example 8.3: The joint density function of 𝑋 and 𝑌 is given by
f(x, y) = 2e^(−x) e^(−2y)   for 0 < x < ∞, 0 < y < ∞
f(x, y) = 0                 otherwise
Compute
(a) 𝑃{𝑋 > 1, 𝑌 < 1}
(b) 𝑃{𝑋 < 𝑌}
(c) 𝑃{𝑋 < 𝑎}
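A symbolic sketch of the three computations (for (b), the event {X < Y} is the region 0 < x < y):

import sympy as sp

x, y, a = sp.symbols('x y a', positive=True)
f = 2 * sp.exp(-x) * sp.exp(-2 * y)

p_a = sp.integrate(f, (x, 1, sp.oo), (y, 0, 1))   # P(X > 1, Y < 1) = e^(-1)(1 - e^(-2))
p_b = sp.integrate(f, (x, 0, y), (y, 0, sp.oo))   # P(X < Y) = 1/3
p_c = sp.integrate(f, (y, 0, sp.oo), (x, 0, a))   # P(X < a) = 1 - e^(-a)
print(sp.simplify(p_a), sp.simplify(p_b), sp.simplify(p_c))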



Example 8.4: An insurance company insures a large number of drivers.
Let 𝑋 be the random variable representing the company’s losses under
collision insurance, and let 𝑌 represent the company’s losses under
liability insurance. X and Y have the joint probability density function
f(x, y) = (2x + 2 − y)/4   for 0 < x < 1, 0 < y < 2
f(x, y) = 0                otherwise
What is the probability that the total loss is at least 1?


2. Conditional Distributions
Discrete Case
• If 𝑋 and 𝑌 are discrete random variables, the conditional probability
mass function of 𝑋 given that 𝑌 = 𝑦 is defined by
p_X|Y(x|y) = P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y) = p(x, y) / p_Y(y)
for all values of y such that p_Y(y) > 0
• The conditional probability distribution function of 𝑋 given that
Y = y is defined, for all y such that p_Y(y) > 0, by
F_X|Y(x|y) = P(X ≤ x | Y = y) = Σ_{a ≤ x} p_X|Y(a|y)

Example 8.5: Suppose that 𝑝(𝑥, 𝑦), the joint probability mass function of 𝑋
and 𝑌, is given by
𝑝(0, 0) = 0.4
𝑝(0, 1) = 0.2
𝑝(1, 0) = 0.1
𝑝(1, 1) = 0.3
Calculate the conditional probability mass function of 𝑋 given that 𝑌 = 1.
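A quick numeric sketch of the computation: p_Y(1) = p(0, 1) + p(1, 1), and each conditional probability is the joint value divided by that marginal.

joint = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}

p_Y1 = joint[(0, 1)] + joint[(1, 1)]               # P(Y = 1) = 0.5
cond = {x: joint[(x, 1)] / p_Y1 for x in (0, 1)}   # p_X|Y(x | 1)
print(cond)                                        # {0: 0.4, 1: 0.6}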

Continuous Case
• If 𝑋 and 𝑌 have a joint probability density function 𝑓(𝑥, 𝑦), then the
conditional probability density function of 𝑋 given that 𝑌 = 𝑦 is
defined, for all values of 𝑦 such that 𝑓𝑌 𝑦 > 0, by

f_X|Y(x|y) = f(x, y) / f_Y(y)

Example 8.7: A company is studying the amount of sick leave taken by
its employees. The company allows a maximum of 100 hours of paid
sick leave in a year. The random variable 𝑋 represents the leave time
taken by a randomly selected employee last year. The random variable
𝑌 represents the leave time taken by the same employee this year. Each
random variable is measured in hundreds of hours, e.g., 𝑋 =
0.50 means that the employee took 50 hours last year. Thus 𝑋 and
𝑌 assume values in the interval [0, 1]. The joint probability density
function for 𝑋 and 𝑌 is
f(x, y) = 2 − 1.2x − 0.8y   with 0 ≤ x, y ≤ 1
(a) Find f_X(x) and f_Y(y)
(b) Find f_X|Y(x|y) and f_Y|X(y|x)

3. Conditional Expectation and Variance


Discrete Case
Conditional Expectation
E(X | Y = y) = Σ_x x · p_X|Y(x|y)
E(Y | X = x) = Σ_y y · p_Y|X(y|x)
Conditional Variance
Var(X | Y = y) = E((X − E(X | Y = y))² | Y = y)
              = E(X² | Y = y) − (E(X | Y = y))²
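Illustration (not in the original slides): a generic sketch of these two formulas, given a conditional pmf as a dictionary; the values below are hypothetical.

def cond_mean_var(cond_pmf):
    # cond_pmf maps each value x to p_X|Y(x | y) for one fixed y.
    mean = sum(x * p for x, p in cond_pmf.items())
    second = sum(x**2 * p for x, p in cond_pmf.items())
    return mean, second - mean**2      # E(X | Y = y), Var(X | Y = y)

print(cond_mean_var({0: 0.4, 1: 0.6}))   # (0.6, 0.24)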



Example 8.8: Suppose that 3 balls are randomly selected from an urn
containing 3 red, 4 white, and 5 blue balls. Let X and Y denote,
respectively, the numbers of red and white balls chosen.
(a) Find the joint probability mass function of 𝑋 and 𝑌, the probability
mass function of 𝑋, probability mass function of 𝑌.
(b) Find 𝐸(𝑋|𝑌 = 0)
(c) Find 𝑉𝑎𝑟(𝑌|𝑋 = 1)



Continuous Case
Conditional Expectation

E(X | Y = y) = ∫_{−∞}^{∞} x f_X|Y(x|y) dx
E(Y | X = x) = ∫_{−∞}^{∞} y f_Y|X(y|x) dy



Continuous Case
Conditional Variance
Var(X | Y = y) = E((X − E(X | Y = y))² | Y = y)
              = E(X² | Y = y) − (E(X | Y = y))²
where
E(X² | Y = y) = ∫_{−∞}^{∞} x² f_X|Y(x|y) dx



Example 8.9: A company is studying the amount of sick leave taken by
its employees. The company allows a maximum of 100 hours of paid
sick leave in a year. The random variable 𝑋 represents the leave time
taken by a randomly selected employee last year. The random variable
𝑌 represents the leave time taken by the same employee this year. Each
random variable is measured in hundreds of hours, e.g., 𝑋 =
0.50 means that the employee took 50 hours last year. Thus 𝑋 and
𝑌 assume values in the interval [0, 1]. The joint probability density
function for 𝑋 and 𝑌 is
𝑓 𝑥, 𝑦 = 2 − 1.2𝑥 − 0.8𝑦 𝑤𝑖𝑡ℎ 0 ≤ 𝑥, 𝑦 ≤ 1
Find 𝐸(𝑌|𝑋 = 0.1)
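One possible sympy sketch: compute the marginal f_X, form the conditional density at x = 0.1, and integrate.

import sympy as sp

x, y = sp.symbols('x y')
f = 2 - sp.Rational(6, 5) * x - sp.Rational(4, 5) * y   # 2 - 1.2x - 0.8y

f_X = sp.integrate(f, (y, 0, 1))                  # marginal density of X
f_cond = (f / f_X).subs(x, sp.Rational(1, 10))    # f_Y|X(y | 0.1)
print(sp.integrate(y * f_cond, (y, 0, 1)))        # E(Y | X = 0.1) = 101/222 ≈ 0.455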

4. Independent Random Variables


• The random variables 𝑋 and 𝑌 are said to be independent if, for any two
sets of real numbers 𝐴 and 𝐵,
P(X ∈ A, Y ∈ B) = P(X ∈ A) · P(Y ∈ B)
• In terms of the joint distribution function F of X and Y, X and Y are
independent if
F(a, b) = F_X(a) · F_Y(b)   for all a, b
• When X and Y are discrete random variables, the condition of
independence is equivalent to
p(x, y) = p_X(x) · p_Y(y)   for all x, y
• In the jointly continuous case, the condition of independence is equivalent
to
f(x, y) = f_X(x) · f_Y(y)   for all x, y
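Illustration (not in the original slides): a minimal check of the discrete condition above for a joint pmf held in a dictionary; the numbers are hypothetical.

from math import isclose

joint = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}
xs = {x for x, _ in joint}
ys = {y for _, y in joint}
p_X = {x: sum(joint[(x, y)] for y in ys) for x in xs}
p_Y = {y: sum(joint[(x, y)] for x in xs) for y in ys}

independent = all(isclose(joint[(x, y)], p_X[x] * p_Y[y]) for x in xs for y in ys)
print(independent)   # False here: p(0, 0) = 0.4 but p_X(0) * p_Y(0) = 0.6 * 0.5 = 0.3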


Example 8.10: Let p(x, y) = (xy + y)/27 for x = 1, 2, 3 and y = 1, 2 be the
joint probability mass function of the random variables X and Y. Construct
a table of the joint probabilities of X and Y and the marginal probabilities
of X and Y. Determine whether X and Y are dependent or independent.



Example 8.11: The joint density of two random variables X and Y is
given by
f(x, y) = x e^(−(x+y))   for x > 0, y > 0
f(x, y) = 0              otherwise
Are the random variables X and Y independent?



Example 8.12: An insurance company sells two types of auto insurance
policies: Basic and Deluxe. The time until the next Basic policy claim is
an exponential random variable with mean two days. The time until the
next Deluxe policy claim is an independent exponential random
variable with mean three days. What is the probability that the next
claim will be a Deluxe policy claim?


5. Functions of Two Random Variables


The Sum of Two Discrete Random Variables
𝑋 and 𝑌 are two discrete random variables. Let 𝑆 = 𝑋 + 𝑌
Then 𝑆 is a discrete random variable with
p_S(s) = Σ_x p(x, s − x)



The Sum of Two Discrete Random Variables
Example 8.13: X and Y are two discrete random variables whose joint
probability mass function is given as follows:

           x = 90   x = 100   x = 110
  y = 0     0.05     0.27      0.18
  y = 10    0.15     0.33      0.02

Find the distribution of 𝑆 = 𝑋 + 𝑌
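A sketch of accumulating the pmf of S from the table above:

from collections import defaultdict

joint = {(90, 0): 0.05, (100, 0): 0.27, (110, 0): 0.18,
         (90, 10): 0.15, (100, 10): 0.33, (110, 10): 0.02}

p_S = defaultdict(float)
for (x, y), prob in joint.items():
    p_S[x + y] += prob   # every cell (x, y) contributes to s = x + y
print(dict(p_S))         # s = 90, 100, 110, 120 with probs 0.05, 0.42, 0.51, 0.02 (up to rounding)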



The Sum of Two Independent Discrete Random Variables
Let X and Y be two independent discrete random variables. The
probability mass function of S = X + Y is
p_S(s) = Σ_x p_X(x) · p_Y(s − x)



The Sum of Two Independent Discrete Random Variables
Example 8.14: An insurance company has two clients. The random
variables representing the number of claims filed by each client are 𝑋
and 𝑌. 𝑋 and 𝑌 are independent, and each has the same probability
distribution.

  x, y               0     1     2
  p_X(x) = p_Y(y)   1/2   1/4   1/4

Find the distribution for 𝑆 = 𝑋 + 𝑌
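A sketch of the convolution sum for this example, with each pmf stored as a dictionary from value to probability:

from fractions import Fraction as F

p_X = {0: F(1, 2), 1: F(1, 4), 2: F(1, 4)}
p_Y = dict(p_X)                  # Y has the same distribution as X

p_S = {}
for x, px in p_X.items():
    for y, py in p_Y.items():
        p_S[x + y] = p_S.get(x + y, 0) + px * py   # independence: multiply the pmfs
print(p_S)   # S = 0, 1, 2, 3, 4 with probabilities 1/4, 1/4, 5/16, 1/8, 1/16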



Sum of two independent continuous random variables
• Suppose that 𝑋 and 𝑌 are independent, continuous random variables
having probability density functions 𝑓𝑋 and 𝑓𝑌 . The cumulative
distribution function of X + Y is obtained as follows:
F_{X+Y}(a) = ∫_{−∞}^{∞} F_X(a − y) f_Y(y) dy
• The cumulative distribution function 𝐹𝑋+𝑌 is called the convolution of
the distributions 𝐹𝑋 and 𝐹𝑌
• The probability density function of X + Y is
f_{X+Y}(a) = ∫_{−∞}^{∞} f_X(a − y) f_Y(y) dy


Sum of Uniform Random Variables
Example 8.15: If 𝑋 and 𝑌 are independent random variables, both
uniformly distributed on (0, 1), calculate the probability density of
𝑋 + 𝑌.
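Before the convolution integral, a Monte Carlo sketch can preview the answer; the density of X + Y works out to the triangular shape used for comparison below, and the simulation only approximates it.

import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(0, 1, 200_000) + rng.uniform(0, 1, 200_000)

hist, edges = np.histogram(s, bins=20, range=(0, 2), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
exact = np.where(mid <= 1, mid, 2 - mid)   # triangular density: a on (0, 1], 2 - a on (1, 2)
print(np.max(np.abs(hist - exact)))        # small, on the order of 0.01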


6. Expected Values of Functions of Random Variables


Finding E(g(X, Y))
• Let 𝑋 and 𝑌 be random variables and let 𝑔(𝑥, 𝑦) be a function of two
variables.
• If 𝑋 and 𝑌 are discrete with joint probability function 𝑝(𝑥, 𝑦),
E[g(X, Y)] = Σ_x Σ_y g(x, y) p(x, y)
• If X and Y are continuous with joint density f(x, y),
E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f(x, y) dx dy
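Illustration (not in the original slides): a small helper for the discrete case, with g passed in as a function and a hypothetical pmf.

def expect(g, joint_pmf):
    # E[g(X, Y)] = sum over all (x, y) of g(x, y) * p(x, y)
    return sum(g(x, y) * p for (x, y), p in joint_pmf.items())

pmf = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}
print(expect(lambda x, y: x + y, pmf))   # E(X + Y) = 0.4 + 0.5 = 0.9 (up to rounding)
print(expect(lambda x, y: x * y, pmf))   # E(XY) = 0.3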



Example 8.24: X and Y are two discrete random variables whose joint
probability mass function is given as follows:

           x = 90   x = 100   x = 110
  y = 0     0.05     0.27      0.18
  y = 10    0.15     0.33      0.02

(a) Find 𝐸(𝑋 + 𝑌), check if 𝐸(𝑋 + 𝑌) = 𝐸(𝑋) + 𝐸(𝑌) or not


(b) Find 𝐸(𝑋𝑌)



Example 8.25: A company is studying the amount of sick leave taken by
its employees. The company allows a maximum of 100 hours of paid
sick leave in a year. The random variable 𝑋 represents the leave time
taken by a randomly selected employee last year. The random variable
𝑌 represents the leave time taken by the same employee this year. Each
random variable is measured in hundreds of hours, e.g., 𝑋 =
0.50 means that the employee took 50 hours last year. Thus 𝑋 and
𝑌 assume values in the interval [0, 1]. The joint probability density
function for 𝑋 and 𝑌 is
𝑓 𝑥, 𝑦 = 2 − 1.2𝑥 − 0.8𝑦 𝑤𝑖𝑡ℎ 0 ≤ 𝑥, 𝑦 ≤ 1
Find 𝐸(𝑋𝑌)

Note

• E(X + Y) = E(X) + E(Y)
• E(XY) = E(X) · E(Y) if X and Y are independent



Example 8.26: We look at the continuous random variables 𝑆 and 𝑇,
the time between accidents in towns A and B, respectively. The joint
density function of 𝑆 and 𝑇 is
f(s, t) = e^(−(s+t))   for s ≥ 0 and t ≥ 0
(a) Are 𝑆 and 𝑇 independent?
(b) Find 𝐸(𝑆𝑇)



The Covariance of 𝑿 and 𝒀
Let 𝑋 and 𝑌 be random variables. The covariance of 𝑋 and 𝑌 is defined
by
cov(X, Y) = E[(X − E(X))(Y − E(Y))] = E(XY) − E(X) · E(Y)



Useful properties of covariance
• cov(X, Y) = 0 if X and Y are independent
• cov(X, Y) = cov(Y, X)
• cov(X, X) = Var(X)
• cov(X, k) = 0 if k is a constant
• cov(aX, bY) = a · b · cov(X, Y)
• cov(X, Y + Z) = cov(X, Y) + cov(X, Z)



Example 8.27: Let X and Y be two discrete random variables whose joint
probability mass function is given as follows:

           x = 90   x = 100   x = 110
  y = 0     0.05     0.27      0.18
  y = 10    0.15     0.33      0.02

Find 𝑐𝑜𝑣(𝑋, 𝑌)
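A sketch of cov(X, Y) = E(XY) − E(X)·E(Y) applied to this table:

joint = {(90, 0): 0.05, (100, 0): 0.27, (110, 0): 0.18,
         (90, 10): 0.15, (100, 10): 0.33, (110, 10): 0.02}

E_X  = sum(x * p for (x, y), p in joint.items())
E_Y  = sum(y * p for (x, y), p in joint.items())
E_XY = sum(x * y * p for (x, y), p in joint.items())
print(E_XY - E_X * E_Y)   # 487 - 100 * 5 = -13 (up to float rounding)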



Example 8.28: We look at the continuous random variables 𝑆 and 𝑇,
the time between accidents in towns A and B, respectively. The joint
density function of 𝑆 and 𝑇 is
f(s, t) = e^(−(s+t))   for s ≥ 0 and t ≥ 0
Find 𝑐𝑜𝑣(𝑆, 𝑇)



The Variance of 𝑿 + 𝒀
Var(X + Y) = Var(X) + Var(Y) + 2 cov(X, Y)
When X and Y are independent, we have
Var(X + Y) = Var(X) + Var(Y)
In general,
Var(aX ± bY ± c) = a² Var(X) + b² Var(Y) ± 2ab cov(X, Y)
where a, b, c are constants
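Illustration (not in the original slides): a quick numpy check of the general identity on simulated, deliberately correlated data; signs are handled by folding them into the constants a and b.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500_000)
y = 0.5 * x + rng.normal(size=500_000)      # correlated with x on purpose
a, b, c = 2.0, -3.0, 7.0

cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)
lhs = np.var(a * x + b * y + c)
rhs = a**2 * np.var(x) + b**2 * np.var(y) + 2 * a * b * cov_xy
print(lhs, rhs)   # identical up to floating-point error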



Example 8.29: X and Y are two discrete random variables whose joint
probability mass function is given as follows:

           x = 90   x = 100   x = 110
  y = 0     0.05     0.27      0.18
  y = 10    0.15     0.33      0.02

Find 𝑉𝑎𝑟(𝑋 + 𝑌)



Example 8.30: A company is studying the amount of sick leave taken by
its employees. The company allows a maximum of 100 hours of paid
sick leave in a year. The random variable 𝑋 represents the leave time
taken by a randomly selected employee last year. The random variable
𝑌 represents the leave time taken by the same employee this year. Each
random variable is measured in hundreds of hours, e.g., 𝑋 =
0.50 means that the employee took 50 hours last year. Thus 𝑋 and
𝑌 assume values in the interval [0, 1]. The joint probability density
function for 𝑋 and 𝑌 is
𝑓 𝑥, 𝑦 = 2 − 1.2𝑥 − 0.8𝑦 𝑤𝑖𝑡ℎ 0 ≤ 𝑥, 𝑦 ≤ 1
Find 𝑉𝑎𝑟(𝑋 + 𝑌)


Example 8.31: We look at the continuous random variables 𝑆 and 𝑇,
the time between accidents in towns A and B, respectively. The joint
density function of 𝑆 and 𝑇 is
f(s, t) = e^(−(s+t))   for s ≥ 0 and t ≥ 0
Find 𝑉𝑎𝑟(𝑆 + 𝑇)



The Correlation Coefficient
• Let 𝑋 and 𝑌 be random variables. The correlation coefficient between
𝑋 and 𝑌 is defined by
ρ_XY = cov(X, Y) / (σ_X σ_Y) = cov(X, Y) / √(Var(X) · Var(Y))
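Illustration (not in the original slides): the definition turned into code for a discrete joint pmf, reusing the table from the earlier examples.

from math import sqrt

joint = {(90, 0): 0.05, (100, 0): 0.27, (110, 0): 0.18,
         (90, 10): 0.15, (100, 10): 0.33, (110, 10): 0.02}

def moment(g):
    return sum(g(x, y) * p for (x, y), p in joint.items())

E_X, E_Y = moment(lambda x, y: x), moment(lambda x, y: y)
var_X = moment(lambda x, y: x * x) - E_X**2
var_Y = moment(lambda x, y: y * y) - E_Y**2
cov_XY = moment(lambda x, y: x * y) - E_X * E_Y
print(cov_XY / sqrt(var_X * var_Y))   # -13 / sqrt(40 * 25) ≈ -0.41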



The Correlation Coefficient
• −1 ≤ 𝜌𝑋𝑌 ≤ 1
• If 𝑋 and 𝑌 are independent then 𝜌𝑋𝑌 = 0
• If 𝜌𝑋𝑌 = 1, then 𝑌 = 𝑎𝑋 + 𝑏 𝑤𝑖𝑡ℎ 𝑎 > 0
• If 𝜌𝑋𝑌 = −1, then 𝑌 = 𝑎𝑋 + 𝑏 𝑤𝑖𝑡ℎ 𝑎 < 0
• Values of ρ_XY close to ±1 are interpreted as an indication of a high
level of linear association between 𝑋 and 𝑌
• Values of 𝜌𝑋𝑌 near 0 are interpreted as implying little or no linear
relationship between 𝑋 and 𝑌

Example 8.32: A company is studying the amount of sick leave taken by
its employees. The company allows a maximum of 100 hours of paid
sick leave in a year. The random variable 𝑋 represents the leave time
taken by a randomly selected employee last year. The random variable
𝑌 represents the leave time taken by the same employee this year. Each
random variable is measured in hundreds of hours, e.g., 𝑋 =
0.50 means that the employee took 50 hours last year. Thus 𝑋 and
𝑌 assume values in the interval [0, 1]. The joint probability density
function for 𝑋 and 𝑌 is
𝑓 𝑥, 𝑦 = 2 − 1.2𝑥 − 0.8𝑦 𝑤𝑖𝑡ℎ 0 ≤ 𝑥, 𝑦 ≤ 1
Find 𝜌𝑋𝑌

7. Central Limit Theorem


Let X₁, X₂, …, Xₙ be independent random variables, all of which have
the same probability distribution and thus the same mean μ and
variance σ². Then, for large n, the sum S = X₁ + X₂ + ⋯ + Xₙ is
approximately normal with mean nμ and variance nσ².



Example 8.36: 𝑆 = 𝑋1 + 𝑋2 + ⋯ + 𝑋500 , where the Xi are independent
and identically distributed with mean 0.50 and variance 0.25. Use the
Central Limit Theorem to find 𝑃(235 < 𝑆 < 265).
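A sketch of the normal approximation with only the standard library (the standard normal CDF written via math.erf): S is approximately N(nμ, nσ²) = N(250, 125).

from math import erf, sqrt

n, mu, sigma2 = 500, 0.50, 0.25
mean, sd = n * mu, sqrt(n * sigma2)   # 250 and sqrt(125)

def phi(z):
    # standard normal CDF
    return 0.5 * (1 + erf(z / sqrt(2)))

prob = phi((265 - mean) / sd) - phi((235 - mean) / sd)
print(prob)   # ≈ 0.82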
