
BTRY 4080 / STSCI 4080, Fall 2009

Theory of Probability

Instructor: Ping Li

Department of Statistical Science

Cornell University

General Information

• Lectures: Tue, Thu 10:10-11:25 am, Malott Hall 253


• Section 1: Mon 2:55 - 4:10 pm, Warren Hall 245
• Section 2: Wed 1:25 - 2:40 pm, Warren Hall 261
• Instructor: Ping Li, pingli@cornell.edu,
Office Hours: Tue, Thu 11:25 am - 12 pm, 1192 Comstock Hall

• TA: Xiao Luo, lx42@cornell.edu. Office hours:


(1) Mon, 4:10 - 5:10pm Warren Hall 245;
(2) Wed, 2:40 - 3:40pm, Warren Hall 261.

• Prerequisites: Math 213 or 222 or equivalent; a course in statistical methods is recommended

• Textbook: Ross, A First Course in Probability, 7th edition, 2006



• Exams: Locations TBD

– Prelim 1: In Class, September 29, 2009

– Prelim 2: In Class, October 20, 2009

– Prelim 3: In Class, TBD

– Final Exam: Wed. December 16, 2009, from 2pm to 4:30pm.

– Policy: Closed book, closed notes



• Homework: Weekly
– Please turn in your homework either in class or to the BSCB front desk
(Comstock Hall, 1198).

– No late homework will be accepted.

– Before the overall homework grade is computed, the assignment with the
lowest grade will be dropped (if its score is ≥ 25%), and the one with the
second-lowest grade will also be dropped (if its score is ≥ 50%).
– It is the students’ responsibility to keep copies of the submitted homework.

• Grading:
1. The homework: 30%

2. The prelim with the lowest grade: 5%

3. The other two prelims: 15% each

4. The final exam: 35%

• Course Letter Grade Assignment
A ≈ 90% (on the absolute scale)
C ≈ 60% (on the absolute scale)

In borderline cases, participation in section and class interactions will be used as
a determining factor.

Material

Topic                                         Textbook

Combinatorics                                 1.1 - 1.6
Axioms of Probability                         2.1 - 2.5
Conditional Probability and Independence      3.1 - 3.5
Discrete Random Variables                     4.1 - 4.9
Continuous Random Variables                   5.1 - 5.7
Jointly Distributed Random Variables          6.1 - 6.7
Expectation                                   7.1 - 7.9

As time permits:
Limit Theorems                                8.1 - 8.6
Poisson Processes and Markov Chains           9.1 - 9.2
Simulation                                    10.1 - 10.4

Probability Is Increasingly Popular & Truly Useful

An Example: Probabilistic Counting

Consider two sets S1, S2 ⊂ Ω = {0, 1, 2, ..., D − 1}, for example

D = 2000
S1 = {0, 15, 31, 193, 261, 495, 563, 919}
S2 = {5, 15, 19, 193, 209, 325, 563, 1029, 1921}

Q: What is the size of intersection a = |S1 ∩ S2 |?

A: Easy! a = |{15, 193, 563}| = 3.



Challenges

• What if the size of the space is D = 2^64?

• What if there are 10 billion sets, instead of just 2?

• What if we only need approximate answers?

• What if we need the answers really quickly?

• What if we are mostly interested in sets that are highly similar?

• What if the sets are dynamic and frequently updated?

• ...

Search Engines

Google/Yahoo!/Bing have each collected at least 10^10 (10 billion) web pages.

Two examples (among many tasks)

• (Very quickly) Determine if two Web pages are nearly duplicates


A document can be viewed as a collection (set) of words or of contiguous words.
This is a set-intersection counting problem.

• (Very quickly) Determine the number of co-occurrences of two words


A word can be viewed as the set of documents containing that word.
This is also a set-intersection counting problem.

Term Document Matrix

          Doc 1   Doc 2   Doc 3   ...   Doc m
Word 1      1       0       0     ...     1
Word 2      0       1       0     ...     0
Word 3      ·       ·       ·     ...     ·
...
Word n      ·       ·       ·     ...     ·

≥ 10^10 documents (Web pages).

≥ 10^5 common English words. (What about the total number of contiguous words?)

One Method for Probabilistic Counting of Intersections

Consider two sets S1 , S2 ⊂ Ω = {0, 1, 2, ..., D − 1}

S1 = {0, 15, 31, 193, 261, 495, 563, 919}


S2 = {5, 15, 19, 193, 209, 325, 563, 1029, 1921}

One can first generate a random permutation π :

π : Ω = {0, 1, 2, ..., D − 1} −→ {0, 1, 2, ..., D − 1}

Apply π to sets S1 and S2 (and all other sets). For example

π (S1 ) = {13, 139, 476, 597, 698, 1355, 1932, 2532}


π (S2 ) = {13, 236, 476, 683, 798, 1456, 1968, 2532, 3983}

Store only the minimums

min{π (S1 )} = 13, min{π (S2 )} = 13

Suppose we can theoretically calculate the probability

P = Pr (min{π (S1 )} = min{π (S2 )})

S1 and S2 are more similar =⇒ larger P (i.e., closer to 1).

Generate a (small) number k of random permutations and each time store the minimums.

Count the number of matches to estimate the similarity between S1 and S2 .



Later, we can easily show that

P = Pr (min{π(S1)} = min{π(S2)}) = |S1 ∩ S2| / |S1 ∪ S2| = a / (f1 + f2 − a),

where a = |S1 ∩ S2|, f1 = |S1|, f2 = |S2|.

Therefore, we can estimate P as a binomial probability from k permutations.
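A minimal Matlab sketch of this estimator (illustrative, not part of the original slides; k = 500 is an arbitrary choice): simulate k random permutations, count how often the stored minimums match, and invert P = a/(f1 + f2 − a) to recover an estimate of a.

% Minhash-style estimate of a = |S1 ∩ S2| (a sketch)
D = 2000; k = 500;
S1 = [0 15 31 193 261 495 563 919];
S2 = [5 15 19 193 209 325 563 1029 1921];
matches = 0;
for t = 1:k
    perm = randperm(D) - 1;          % random permutation of {0, ..., D-1}
    % "+1" converts the 0-based elements into Matlab's 1-based indices
    matches = matches + (min(perm(S1+1)) == min(perm(S2+1)));
end
Phat = matches / k;                  % estimates P = a/(f1+f2-a)
ahat = Phat * (numel(S1) + numel(S2)) / (1 + Phat)   % should be near a = 3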



Combinatorial Analysis

Ross: Chapter 1, Sections 1.1 - 1.6



Basic Principles of Counting, Section 1.2

1. The multiplicative principle


A project requires two sub-tasks, A and B.
There are m ways to complete A, and there are n ways to complete B.

Q: How many different ways to complete this project?

2. The additive principle


A project can be completed either by doing task A, or by doing task B.
There are m ways to complete A, and there are n ways to complete B.

Q: How many different ways to complete this project?



The Multiplicative Principle

[Diagram: Origin → sub-task A (ways 1, 2, ..., m) → sub-task B (ways 1, 2, ..., n) → Destination.]

Q: How many different routes from the origin to reach the destination?

Route 1: { A:1 + B:1}


Route 2: { A:1 + B:2}
...
Route n: { A:1 + B:n}
Route n+1: { A:2 + B:1}
...
Route ? { A:m + B:n}

The Additive Principle

[Diagram: from the Origin, one reaches the Destination either via task A (ways 1, 2, ..., m) or via task B (ways 1, 2, ..., n).]
Q: How many different routes from the origin to reach the destination?

Example:

A college planning committee consists of 3 freshmen, 4 sophomores, 5 juniors,


and 2 seniors. A subcommittee of 4, consisting of 1 person from each class, is to
be chosen.

Q: How many different subcommittees are possible?

—————–

Multiplicative or additive?

Example:

In a 7-place license plate, the first 3 places are to be occupied by letters and the
final 4 by numbers, e.g., ITH-0827.

Q: How many different 7-place license plates are possible?

—————–

Multiplicative or additive?

Example:

How many license plates would be possible if repetition among letters or numbers
were prohibited?

Permutations and Combinations, Section 1.3, 1.4

Need to select 3 letters from the 26 English letters, to make a license plate.

Q: How many different arrangements are possible?

———————

The answer differs if

• Order of 3 letters matters =⇒ Permutation


e.g., ITH, TIH, HIT, are all different.

• Order of 3 letters does not matter =⇒ Combination



Permutations

Basic formula:

Suppose we have n objects. How many different permutations are there?

n × (n − 1) × (n − 2) × ... × 1 = n!
Why?

Position:   1           2             3             ...   n
Choices:    n choices   n−1 choices   n−2 choices   ...   1 choice

Which counting principle? Multiplicative or additive?



Example:

A baseball team consists of 9 players.

Q: How many different batting orders are possible?



Example:

A class consists of 6 men and 4 women. The students are ranked according to
their performance in the final, assuming no ties.

Q: How many different rankings are possible?

Q: If the women are ranked among themselves and the men among themselves,
how many different rankings are possible?

Example:

How many different letter arrangements can be formed using P,E,P ?

The brute-force approach

• Step 1: Permutations assuming P1 and P2 are different.


P1 P2 E,   P2 P1 E,   E P1 P2,
P1 E P2,   P2 E P1,   E P2 P1

• Step 2: Removing duplicates assuming P1 and P2 are the same.


PPE PEP EPP

Q: How many duplicates?



Example:

How many different letter arrangements can be formed using P,E,P,P,E,R ?

Answer:

6! / (3! × 2! × 1!) = 60

Why?
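A brute-force Matlab check (a sketch, not from the slides): enumerate all 6! orderings of the letters and count the distinct rows.

A = perms('PEPPER');           % all 6! = 720 orderings, with duplicates
size(unique(A, 'rows'), 1)     % prints 60, matching 6!/(3! * 2! * 1!)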

A More General Example of Permutations:

How many different permutations are there of n objects, of which n1 are identical, n2 are identical, ..., and nr are identical?

• Step 1: Assuming all n objects are distinct, there are n! permutations.

• Step 2: For each permutation, there are n1! × n2! × ... × nr! ways to switch objects and still remain the same permutation.

• Step 3: The total number of (unique) permutations is

n! / (n1! × n2! × ... × nr!)

1 2 3 4 5 6 7 8 9 10 11

How many ways to switch the elements and remain the same permutation?

Example:

A signal consists of 9 flags hung in a line, which can be made from a set of 4
white flags, 3 red flags, and 2 blue flags. All flags of the same color are identical.

Q: How many different signals are possible?



Combinations

Basic question of combination: the number of different groups of r objects that can be formed from a total of n objects.

Basic formula of combination:

C(n, r) = n! / ((n − r)! × r!)

Pronounced as “n-choose-r ”

Basic Formula of Combination

Step 1: When orders are relevant, the number of different ordered groups of r objects formed from n objects = n × (n − 1) × (n − 2) × ... × (n − r + 1)

Position:   1           2             3             ...   r
Choices:    n choices   n−1 choices   n−2 choices   ...   n−r+1 choices

Step 2: When orders are NOT relevant, each group of r objects can be formed in r! different ways.

Step 3: Therefore, when orders are NOT relevant, the number of different groups of r objects is

n × (n − 1) × ... × (n − r + 1) / r!
= [n × (n − 1) × ... × (n − r + 1) × (n − r) × (n − r − 1) × ... × 1] / [r! × (n − r) × (n − r − 1) × ... × 1]
= n! / ((n − r)! × r!)
= C(n, r)

Example:

A committee of 3 is to be formed from a group of 20 people.

Q: How many different committees are possible?



Example:

How many different committees consisting of 2 women and 3 men can be formed,
from a group of 5 women and 7 men?

Example (1.4.4c):

Consider a set of n antennas of which m are defective and n − m are functional.

Q: How many linear orderings are there in which no two defectives are
consecutive?

A: C(n − m + 1, m).

Line up the n − m functional antennas. Insert (at most) one defective antenna between two functional antennas. Counting the two boundary positions as well, there are in total n − m + 1 places into which the m defective antennas can be inserted.

Useful Facts about the Binomial Coefficient C(n, r)

1. C(n, 0) = C(n, n) = 1. Recall 0! = 1.

2. C(n, r) = C(n, n − r)

3. C(n, r) = C(n − 1, r − 1) + C(n − 1, r)

Proof 1, by algebra:

C(n − 1, r − 1) + C(n − 1, r) = (n−1)!/((n−r)!(r−1)!) + (n−1)!/((n−r−1)!r!)
= (n−1)!(r + n − r)/((n−r)!r!) = n!/((n−r)!r!) = C(n, r)

Proof 2, by a combinatorial argument:

A selection of r objects either contains object 1 or does not contain object 1.

4. Some computational tricks

• Avoid the factorial n!, which easily becomes very large: 170! = 7.2574 × 10^306.

• Take advantage of C(n, r) = C(n, n − r).

• Convert products into additions:

C(n, r) = n(n − 1)(n − 2)...(n − r + 1) / r!
        = exp( Σ_{j=0}^{r−1} log(n − j) − Σ_{j=1}^{r} log j )

• Use accurate approximations of the log-factorial function (log n!). Check
http://en.wikipedia.org/wiki/Factorial.
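For instance, a small Matlab sketch of the log-space trick (ours, not from the slides), using the built-in gammaln function, for which gammaln(n+1) = log n!:

% Compute C(n, r) in log space; C(1000, 300) itself overflows double precision
n = 1000; r = 300;
logC = gammaln(n+1) - gammaln(r+1) - gammaln(n-r+1);   % log C(n, r)
log10C = logC / log(10)                                % report log10 instead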

The Binomial Theorem

The Binomial Theorem:

(x + y)^n = Σ_{k=0}^{n} C(n, k) x^k y^{n−k}

Example for n = 2:

(x + y)^2 = x^2 + 2xy + y^2,   C(2, 0) = C(2, 2) = 1,   C(2, 1) = 2

Proof of the Binomial Theorem by Induction

Two key components in proof by induction:

1. Verify a base case, e.g., n = 1.


2. Assume the case n − 1 is true, prove for the case n.

Proof:

The base case is trivially true (but always check!).

Assume

(x + y)^{n−1} = Σ_{k=0}^{n−1} C(n − 1, k) x^k y^{n−1−k}

Then

Σ_{k=0}^{n} C(n, k) x^k y^{n−k}
 = Σ_{k=1}^{n−1} C(n, k) x^k y^{n−k} + x^n + y^n
 = Σ_{k=1}^{n−1} C(n − 1, k) x^k y^{n−k} + Σ_{k=1}^{n−1} C(n − 1, k − 1) x^k y^{n−k} + x^n + y^n
 = y Σ_{k=0}^{n−1} C(n − 1, k) x^k y^{n−1−k} − y^n + Σ_{k=1}^{n−1} C(n − 1, k − 1) x^k y^{n−k} + x^n + y^n
 = y(x + y)^{n−1} + Σ_{j=0}^{n−2} C(n − 1, j) x^{j+1} y^{n−1−j} + x^n      (let j = k − 1)
 = y(x + y)^{n−1} + x Σ_{j=0}^{n−1} C(n − 1, j) x^j y^{n−1−j} − x^n + x^n
 = y(x + y)^{n−1} + x(x + y)^{n−1} = (x + y)^n



Multinomial Coefficients

A set of n distinct items is to be divided into r distinct groups of respective sizes n1, n2, ..., nr, where Σ_{i=1}^{r} ni = n.

Q: How many different divisions are possible?

Answer:

C(n; n1, n2, ..., nr) = n! / (n1! n2! ... nr!)

Proof

Step 1: There are C(n, n1) ways to form the first group of size n1.

Step 2: There are C(n − n1, n2) ways to form the second group of size n2.

...

Step r: There are C(n − n1 − n2 − ... − n_{r−1}, nr) ways to form the last group of size nr.

Therefore, the total number of divisions is

C(n, n1) × C(n − n1, n2) × ... × C(n − n1 − ... − n_{r−1}, nr)
= [n! / ((n − n1)! n1!)] × [(n − n1)! / ((n − n1 − n2)! n2!)] × ... × [(n − n1 − ... − n_{r−1})! / ((n − n1 − ... − n_{r−1} − nr)! nr!)]
= n! / (n1! n2! ... nr!)

Example:

A police department in a small city consists of 10 officers. The policy is to have 5


patrolling the streets, 2 working full time at the station, and 3 on reserve at the
station.

Q: How many different divisions of 10 officers into 3 groups are possible?



The Number of Integer Solutions of Equations

Suppose x1 , x2 , ..., xr , are positive, and x1 + x2 + ... + xr = n.

Q: How many distinct vectors (x1, x2, ..., xr) can satisfy the constraints?

A:

C(n − 1, r − 1)

Line up n 1’s. Draw r − 1 dividing lines, choosing from the n − 1 gaps between consecutive 1’s, to form r groups (the numbers).

Suppose x1 , x2 , ..., xr , are non-negative, and x1 + x2 + ... + xr = n.

Q: How many distinct vectors (x1, x2, ..., xr) can satisfy the constraints?

A:

C(n + r − 1, r − 1),

by padding r zeros after the n 1’s.

One can also let yi = xi + 1, i = 1 to r. Then the problem becomes finding positive integers (y1, ..., yr) such that y1 + y2 + ... + yr = n + r, which again gives C(n + r − 1, r − 1).

Example:

8 identical blackboards are to be divided among 4 schools.

Q (a): How many divisions are possible?

Q (b): How many, if each school must receive at least 1 blackboard?



More Examples for Chapter 1

From a group of 8 women and 6 men a committee consisting of 3 men and 3


women is to be formed.

(a): How many different committees are possible?

(b): How many, if 2 of the men refuse to serve together?

(c): How many, if 2 of the women refuse to serve together?

(d): How many, if 1 man and 1 woman refuse to serve together?



Solution:

(a): How many different committees are possible?

C(8, 3) × C(6, 3) = 56 × 20 = 1120

(b): How many, if 2 of the men refuse to serve together?

C(8, 3) × [C(4, 3) + C(2, 1) C(4, 2)] = 56 × 16 = 896

(c): How many, if 2 of the women refuse to serve together?

[C(6, 3) + C(2, 1) C(6, 2)] × C(6, 3) = 50 × 20 = 1000

(d): How many, if 1 man and 1 woman refuse to serve together?

C(7, 3) C(5, 3) + C(7, 3) C(5, 2) + C(7, 2) C(5, 3) = 350 + 350 + 210 = 910

Or: C(8, 3) C(6, 3) − C(7, 2) C(5, 2) = 910

Fermat’s Combinatorial Identity

Theoretical Exercise 11.

C(n, k) = Σ_{i=k}^{n} C(i − 1, k − 1),   n ≥ k

There is an interesting proof (different from the book) using the result for the
number of integer solutions of equations, by a recursive argument.

Proof:

Let f(m, r) be the number of positive integer solutions of

x1 + x2 + ... + xr = m.

We know f(m, r) = C(m − 1, r − 1).

Suppose m = n + 1 and r = k + 1. Then

f(n + 1, k + 1) = C(m − 1, r − 1) = C(n, k)

x1 can take values from 1 to (n + 1) − k, because xi ≥ 1, i = 1 to k + 1.

Decompose the problem into (n + 1) − k sub-problems by fixing a value for x1.

If x1 = 1, then x2 + x3 + ... + x_{k+1} = m − 1 = n.
The total number of positive integer solutions is f(n, k).

If x1 = 1, there are still f(n, k) possibilities, i.e., C(n − 1, k − 1).

If x1 = 2, there are still f(n − 1, k) possibilities, i.e., C(n − 2, k − 1).

...

If x1 = (n + 1) − k, there are still f(k, k) possibilities, i.e., C(k − 1, k − 1).

Thus, by the additive counting principle,

f(n + 1, k + 1) = Σ_{j=1}^{n+1−k} f(n + 1 − j, k) = Σ_{j=1}^{n+1−k} C(n − j, k − 1)

But is this what we want? Yes!

By the additive counting principle,

f(n + 1, k + 1) = Σ_{j=1}^{n+1−k} f(n + 1 − j, k)
               = Σ_{j=1}^{n+1−k} C(n − j, k − 1)
               = C(n − 1, k − 1) + C(n − 2, k − 1) + ... + C(k − 1, k − 1)
               = C(k − 1, k − 1) + C(k, k − 1) + ... + C(n − 1, k − 1)
               = Σ_{i=k}^{n} C(i − 1, k − 1)

Example: A company has 50 employees, 10 of whom are in management positions.

Q: How many ways are there to choose a committee consisting of 4 people, if there must be at least one person of management rank?

A:

C(50, 4) − C(40, 4) = 138910 = Σ_{k=1}^{4} C(10, k) C(40, 4 − k)

———-

Can we also compute the answer as C(10, 1) C(49, 3) = 184240? No.

It is not correct because the two sub-tasks, C(10, 1) and C(49, 3), overlap: a committee containing two managers is counted once for each manager chosen in the first step. Hence we cannot simply apply the multiplicative principle.
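A quick numerical check in Matlab (a sketch, not from the slides):

nchoosek(50,4) - nchoosek(40,4)                            % 138910
sum(arrayfun(@(k) nchoosek(10,k)*nchoosek(40,4-k), 1:4))   % also 138910
10 * nchoosek(49,3)    % 184240: overcounts committees with 2+ managers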

Axioms of Probability

Chapter 2, Sections 2.1 - 2.5, 2.7



Section 2.1

In this chapter we introduce the concept of the probability of an event and then
show how these probabilities can be computed in certain situations. As a
preliminary, however, we need the concept of the sample space and the events of
an experiment.

Sample Space and Events, Section 2.2

Definition of Sample Space:

The sample space of an experiment, denoted by S , is the set of all possible


outcomes of the experiment.

Example:
S = {girl, boy}, if the outcome of an experiment consists of the determination of
the gender of a newborn.

Example:
If the experiment consists of flipping two coins (H/T), then the sample space
S = {(H, H), (H, T ), (T, H), (T, T )}

Example:
If the outcome of an experiment is the order of finish in a race among 7 horses
having post positions 1, 2, 3, 4, 5, 6, 7, then
S = {all 7! permutations of (1, 2, 3, 4, 5, 6, 7)}

Example:
If the experiment consists of tossing two dice, then
S = {(i, j), i, j = 1, 2, 3, 4, 5, 6}, What is the size |S|?

Example:
If the experiment consists of measuring (in hours) the lifetime of a transistor, then
S = {t : 0 ≤ t ≤ ∞}.

Definition of Events:

Any subset E of the sample space S is an event.

Example:
S = {(H, H), (H, T ), (T, H), (T, T )},
E = {(H, H)} is an event. E = {(H, T), (T, T)} is also an event.

Example:
S = {t : 0 ≤ t ≤ ∞},
E = {t : 1 ≤ t ≤ 5} is an event,
But {t : −10 ≤ t ≤ 5} is NOT an event.

Set Operations on Events

Let A, B , C be events of the same sample space S


A ⊂ S, B ⊂ S, C ⊂ S. Notation for subset: ⊂

Set Union:
A ∪ B = the event which occurs if either A OR B occurs.

Example:
S = {1, 2, 3, 4, 5, ...} = the set of all positive integers.
A = {1}, B = {1, 3, 5}, C = {2, 4, 6},

A ∪ B = {1, 3, 5}, B ∪ C = {1, 2, 3, 4, 5, 6},


A ∪ B ∪ C = {1, 2, 3, 4, 5, 6}

Set Intersection:
A ∩ B = AB = the event which occurs if both A AND B occur.

Example:
S = {1, 2, 3, 4, 5, ...} = the set of all positive integers.
A = {1}, B = {1, 3, 5}, C = {2, 4, 6},

A ∩ B = {1},   B ∩ C = {} = ∅,   A ∩ B ∩ C = ∅

Mutually Exclusive:
If A ∩ B = ∅, then A and B are mutually exclusive.

Set Complement:
Ac = S − A = the event which occurs only if A does not occur.

Example:
S = {1, 2, 3, 4, 5, ...} = the set of all positive integers.
A = {1, 3, 5, ...} = the set of all positive odd integers.
B = {2, 4, 6, ...} = the set of all positive even integers.

Ac = S − A = B , B c = S − B = A.

Laws of Set Operations

Commutative Laws:
A∪B =B∪A A∩B =B∩A

Associative Laws:
(A ∪ B) ∪ C = A ∪ (B ∪ C) = A ∪ B ∪ C
(A ∩ B) ∩ C = A ∩ (B ∩ C) = A ∩ B ∩ C = ABC

Distributive Laws:
(A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C) = AC ∪ BC
(A ∩ B) ∪ C = (A ∪ C) ∩ (B ∪ C)

These laws can be verified by Venn diagrams.



DeMorgan’s Laws

( ∪_{i=1}^{n} Ei )^c = ∩_{i=1}^{n} Ei^c

( ∩_{i=1}^{n} Ei )^c = ∪_{i=1}^{n} Ei^c

Proof of ( ∪_{i=1}^{n} Ei )^c = ∩_{i=1}^{n} Ei^c

Let A = ( ∪_{i=1}^{n} Ei )^c and B = ∩_{i=1}^{n} Ei^c.

1. Prove A ⊆ B:
Suppose x ∈ A. =⇒ x does not belong to any Ei.
=⇒ x belongs to every Ei^c. =⇒ x ∈ ∩_{i=1}^{n} Ei^c = B.

2. Prove B ⊆ A:
Suppose x ∈ B. =⇒ x belongs to every Ei^c.
=⇒ x does not belong to any Ei. =⇒ x ∈ ( ∪_{i=1}^{n} Ei )^c = A.

3. Consequently, A = B.

Proof of ( ∩_{i=1}^{n} Ei )^c = ∪_{i=1}^{n} Ei^c

Let Fi = Ei^c, so Ei = Fi^c.

We have proved ( ∪_{i=1}^{n} Fi )^c = ∩_{i=1}^{n} Fi^c. Therefore

( ∩_{i=1}^{n} Ei )^c = ( ∩_{i=1}^{n} Fi^c )^c = ( ( ∪_{i=1}^{n} Fi )^c )^c = ∪_{i=1}^{n} Fi = ∪_{i=1}^{n} Ei^c

Axioms of Probability, Section 2.3

For each event E in the sample space S , there exists a value P (E), referred to
as the probability of E .

P (E) satisfies three axioms.


Axiom 1: 0 ≤ P (E) ≤ 1

Axiom 2: P (S) = 1

Axiom 3: For any sequence of mutually exclusive events E1, E2, ... (that is, Ei Ej = ∅ whenever i ≠ j),

P( ∪_{i=1}^{∞} Ei ) = Σ_{i=1}^{∞} P(Ei)

Consequences of Three Axioms

Consequence 1: P (∅) = 0.

P (S) = P (S ∪ ∅) = P (S) + P (∅) =⇒ P (∅) = 0

Consequence 2:
If S = ∪_{i=1}^{n} Ei and the mutually exclusive events Ei are all equally likely, meaning P(E1) = P(E2) = ... = P(En), then

P(Ei) = 1/n,   i = 1 to n

Some Simple Propositions, Section 2.4

Proposition 1: P (E c ) = 1 − P (E)

Proof: S = E ∪ E c =⇒ 1 = P (S) = P (E) + P (E c )

Proposition 2: If E ⊂ F , then P (E) ≤ P (F )

Proof:
E ⊂ F =⇒ F = E ∪ (E c ∩ F )
=⇒ P (F ) = P (E) + P (E c ∩ F ) ≥ P (E)

Proposition 3: P (E ∪ F ) = P (E) + P (F ) − P (EF )

Proof:
E ∪ F = (E − EF )∪(F − EF )∪EF , union of three mutually exclusive sets.

=⇒ P (E ∪ F ) = P (E − EF ) + P (F − EF ) + P (EF )
= P (E) − P (EF ) + P (F ) − P (EF ) + P (EF )
= P (E) + P (F ) − P (EF )

Note that E = EF ∪ (E − EF ), union of two mutually exclusive sets.



A Generalization of Proposition 3:

P (E ∪ F ∪ G)
=P (E) + P (F ) + P (G) − P (EF ) − P (EG) − P (F G) + P (EF G)

Proof:
Let B = F ∪ G. Then P (E ∪ B) = P (E) + P (B) − P (EB)

P (B) = P (F ∪ G) = P (F ) + P (G) − P (F G)

P(EB) = P(E(F ∪ G)) = P(EF ∪ EG)
      = P(EF) + P(EG) − P(EF EG) = P(EF) + P(EG) − P(EFG)

Proposition 4: P (E1 ∪ E2 ∪ ... ∪ En ) = ?

Read this proposition and its proof yourself.

We will temporarily skip this proposition and relevant examples 5m, 5n, and 5o.
However, we will come back to those material at a later time.

Example:

A total of 36 members of a club play tennis, 28 play squash, and 18 play


badminton. Furthermore, 22 of the members play both tennis and squash, 12
play both tennis and badminton, 9 play both squash and badminton, and 4 play all
three sports.

Q: How many members of this club play at least one of these sports?

Solution:

A = { members playing tennis }


B = { members playing squash }
C = { members playing badminton }
|A ∪ B ∪ C| =??

Sample Space Having Equally Likely Outcomes, Sec. 2.5

Assumption: In an experiment, all outcomes in the sample space S are equally


likely to occur.

For example, S = {1, 2, 3, ..., N}. Assuming equally likely outcomes,

P({i}) = 1/N,   i = 1 to N

For any event E:

P(E) = (number of outcomes in E) / (number of outcomes in S) = |E| / |S|

Example: If two dice are rolled, what is the probability that the sum of the
upturned faces will equal 7?

S = {(i, j), i, j = 1, 2, 3, 4, 5, 6}

E = {??}

Example: If 3 balls are randomly drawn from a bag containing 6 white and 5
black balls, what is the probability that one of the drawn balls is white and the
other two black?

S = {??}

E = {??}

Example: An urn contains n balls, of which one is special. If k of these balls are withdrawn randomly, what is the probability that the special ball is chosen?

Revisit Probabilistic Counting of Set Intersections

Consider two sets S1 , S2 ⊂ Ω = {0, 1, 2, ..., D − 1}. For example,

S1 = {0, 15, 31, 193, 261, 495, 563, 919}


S2 = {5, 15, 19, 193, 209, 325, 563, 1029, 1921}

One can first generate a random permutation π :

π : Ω = {0, 1, 2, ..., D − 1} −→ {0, 1, 2, ..., D − 1}

Apply π to sets S1 and S2 . For example

π (S1 ) = {13, 139, 476, 597, 698, 1355, 1932, 2532}


π (S2 ) = {13, 236, 476, 683, 798, 1456, 1968, 2532, 3983}

Store only the minimums

min{π (S1 )} = 13, min{π (S2 )} = 13

Suppose we can theoretically calculate the probability

P = Pr (min{π (S1 )} = min{π (S2 )})

S1 and S2 are more similar =⇒ larger P (i.e., closer to 1).

Generate a (small) number k of random permutations and each time store the minimums.

Count the number of matches to estimate the similarity between S1 and S2 .



Now, we can easily (How?) show that

P = Pr (min{π(S1)} = min{π(S2)}) = |S1 ∩ S2| / |S1 ∪ S2| = a / (f1 + f2 − a),

where a = |S1 ∩ S2|, f1 = |S1|, f2 = |S2|.

Therefore, we can estimate P as a binomial probability from k permutations.

How many permutations (what value of k) do we need?

That depends on the variance of this binomial distribution.

Poker Example (2.5.5f): A poker hand consists of 5 cards. If the cards have
distinct consecutive values and are not all of the same suit, we say that the hand
is a straight. What is the probability that one is dealt a straight?

A 2 3 4 5 6 7 8 9 10 J Q K A

Example: A 5-card poker hand is a full house if it consists of 3 cards of one denomination and 2 cards of another denomination. (That is, a full house is three of a kind plus a pair.) What is the probability that one is dealt a full house?

Example (The Birthday Attack): If n people are present in a room, what is the
probability that no two of them celebrate their birthday on the same day of the
year (365 days)? How large need n be so that this probability is less than 1/2?

Example (The Birthday Attack): If n people are present in a room, what is the
probability that no two of them celebrate their birthday on the same day of the
year? How large need n be so that this probability is less than 1/2?

p(n) = [365 × 364 × ... × (365 − n + 1)] / 365^n
     = (365/365) × (364/365) × ... × ((365 − n + 1)/365)

p(23) = 0.4927 < 1/2,   p(50) = 0.0296



[Figure: p(n) plotted against n for n = 0 to 60; p(n) decreases from 1 toward 0, crossing 0.5 near n = 23.]

Matlab Code

% p(i) = probability that no two of n(i) people share a birthday
n = 5:60;
for i = 1:length(n)
    p(i) = 1;
    for j = 2:n(i)
        p(i) = p(i) * (365-j+1)/365;   % factor contributed by the j-th person
    end
end
figure; plot(n,p,'linewidth',2); grid on; box on;
xlabel('n'); ylabel('p(n)');

How can we improve the efficiency when computing p(n) for many values of n?
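One answer (a sketch, not from the slides): the products for successive n share all but one factor, so a single cumulative product computes every p(n) at once.

% Vectorized version: p(n) for n = 1..60 in one pass
n = 1:60;
p = cumprod((365 - n + 1) / 365);    % p(n) = prod_{j=1}^{n} (365-j+1)/365
figure; plot(n, p, 'linewidth', 2); grid on; box on;
xlabel('n'); ylabel('p(n)');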



Example: A football team consists of 20 offensive and 20 defensive players.


The players are to be paired into groups of 2 for the purpose of determining
roommates. The pairing is done at random.

(a) What is the probability that there are no offensive-defensive roommate pairs?

(b) What is the probability that there are 2i offensive-defensive roommate pairs?

Denote the answer to (b) by P2i . The answer to (a) is then P0 .



Solution Steps:

1. Total ways of dividing the 40 players into 20 unordered pairs.

2. Ways of pairing 2i offensive-defensive pairs.

3. Ways of pairing 20 − 2i remaining players within each group.

4. Applying multiplicative principle and equally likely outcome assumption.

5. Obtaining numerical probability values.



Solution Steps:

1. Total ways of dividing the 40 players into 20 unordered pairs:

ordered:    C(40; 2, 2, ..., 2) = 40! / (2!)^20
unordered:  40! / ((2!)^20 (20)!)

2. Ways of choosing and matching 2i offensive-defensive pairs:

C(20, 2i) × C(20, 2i) × (2i)!

3. Ways of pairing the 20 − 2i remaining players within each group:

[ (20 − 2i)! / (2^{10−i} (10 − i)!) ]^2

4. Applying the multiplicative principle and the equally-likely-outcome assumption:

P_{2i} = [ C(20, 2i)^2 (2i)! ( (20 − 2i)! / (2^{10−i} (10 − i)!) )^2 ] / [ 40! / (2^{20} (20)!) ]

5. Obtaining numerical probability values.

Stirling’s approximation: n! ≈ n^{n+1/2} e^{−n} √(2π)

Be careful!

• Better approximations are available.
• n! can easily overflow, and so can C(n, k).
• The probability itself never overflows: P ∈ [0, 1].
• Look for possible cancelations.
• Rearrange terms in the numerator and denominator to avoid overflow.
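A Matlab sketch of this advice (ours, not from the slides): evaluate P_{2i} entirely in log space via gammaln, for which gammaln(m+1) = log m!.

lf = @(m) gammaln(m+1);                      % lf(m) = log(m!)
i = 0:10;                                    % number of off-def pairs is 2i
logNum = 2*(lf(20) - lf(2*i) - lf(20-2*i)) + lf(2*i) ...  % C(20,2i)^2 (2i)!
       + 2*(lf(20-2*i) - (10-i)*log(2) - lf(10-i));       % pair the rest
logDen = lf(40) - 20*log(2) - lf(20);                     % 40!/(2^20 20!)
P = exp(logNum - logDen);
sum(P)                                       % sanity check: should equal 1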

Conditional Probability and Independence

Chapter 3, Sections 3.1 - 3.5



Introduction, Section 3.1

Conditional Probability: one of the most important concepts in probability:

• Calculating probabilities when partial (side) information is available.

• Computing desired probabilities may be easier.



A Famous Brain-Teaser

• There are 3 doors.


• Behind one door is a prize.
• Behind the other 2 is nothing.
• First, you pick one of the 3 doors at random.
• One of the doors that you did not pick will be opened to reveal nothing.

• Q: Do you stick with the door you chose in the first place, or do you switch?

A Joke: Tom & Jerry Conversation

• Jerry: I’ve heard that you will take a plane next week. Are you afraid that
there might be terrorist activities?

• Tom: Yes. That’s exactly why I am prepared.

• Jerry: How?

• Tom: It is reported that the chance of having a bomb on a plane is 1/10^6.
According to what I learned in my probability class (4080), the chance of
having two bombs on the plane will be 1/10^12.

• Jerry: So?

• Tom: Therefore, to dramatically reduce the chance from 1/10^6 to 1/10^12, I will
bring one bomb onto the plane myself!

• Jerry: AH...

Section 3.2, Conditional Probabilities

Example: Suppose we toss a fair die twice.

• What is the probability that the sum of the 2 dice is 8?

Sample space S = {??}

Event E = {??}

• What if we know the first die is 3?

(Reduced) S = {??}    (Reduced) E = {??}



The Basic Formula of Conditional Probability

Two events E, F ⊆ S.
P (E|F ): conditional probability that E occurs given that F occurs.

If P(F) > 0, then

P(E|F) = P(EF) / P(F)

The Basic Formula of Conditional Probability

P(E|F) = P(EF) / P(F)

How to understand this formula? —- Venn diagram

Given that F occurs,

• What is the new (i.e., reduced) sample space S′?

• What is the new (i.e., reduced) event E′?

• P(E|F) = |E′| / |S′| = (|E′|/|S|) / (|S′|/|S|) = ??

Example (3.2.2c):

In the card game bridge the 52 cards are dealt out equally to 4 players — called
East, West, North, and South. If North and South have a total of 8 spades among
them, what is the probability that East has 3 of the remaining 5 spades?

The reduced sample space S′ = {??}

The reduced event E′ = {??}

A transformation of the basic formula

P(E|F) = P(EF) / P(F)

=⇒

P(EF) = P(E|F) × P(F) = P(F|E) × P(E)

P (E|F ) may be easier to compute than P (F |E), or vice versa.



Example:

A total of n balls are sequentially and randomly chosen, without replacement,


from an urn containing r red and b blue balls (n ≤ r + b).
Given that k of the n balls are blue, what is the probability that the first ball
chosen is blue?

Two solutions:

1. Working with the reduced sample space — Easy!


Basically a problem of choosing one ball from n balls randomly.

2. Working with the original sample space and basic formula — More difficult!

Working with original sample space:

Event B : the first ball chosen is blue.

Event Bk : a total of k blue balls are chosen.

Desired probability:

P(B|Bk) = P(B Bk) / P(Bk) = P(Bk|B) P(B) / P(Bk)

P(B) = ??    P(Bk) = ??    P(Bk|B) = ??

Desired probability:

P(B|Bk) = P(B Bk) / P(Bk) = P(Bk|B) P(B) / P(Bk)

P(B) = b/(r + b),
P(Bk) = C(b, k) C(r, n − k) / C(r + b, n),
P(Bk|B) = C(b − 1, k − 1) C(r, n − k) / C(r + b − 1, n − 1)

Therefore,

P(B|Bk) = P(Bk|B) P(B) / P(Bk)
        = [ C(b − 1, k − 1) C(r, n − k) / C(r + b − 1, n − 1) ] × [ b/(r + b) ] × [ C(r + b, n) / (C(b, k) C(r, n − k)) ]
        = k/n

Example (3.2.2f):

Suppose an urn contains 8 red balls and 4 white balls. We draw 2 balls from the
urn without replacement. Assume that at each draw each ball in the urn is equally
likely to be chosen.

Q: What is the probability that both balls drawn are red?

Two solutions:

1. The direct approach

2. The conditional probability approach



Two solutions:

1. The direct approach:

C(8, 2) / C(12, 2) = 14/33

2. The conditional probability approach:

(8/12) × (7/11) = 14/33

The Multiplication Rule

An extremely important formula in science

P (E1 E2 E3 ...En ) = P (E1 )P (E2 |E1 )P (E3 |E1 E2 )...P (En |E1 E2 ...En−1 )

Proof?

Example (3.2.2g):

An ordinary deck of 52 playing cards is randomly divided into 4 piles of 13 cards


each.

Q: Compute the probability that each pile has exactly 1 ace.

A: Two solutions:

1. Direct approach.

2. Conditional probability (as in the textbook).



Solution by the direct approach

• The size of the sample space:

C(52, 13) C(39, 13) C(26, 13) = 52! / (13! × 13! × 13! × 13!) = 52! / (13!)^4

• Fix the 4 aces, one in each pile.
• Then only 12 cards need to be selected for each pile (from the 48 non-ace cards):

C(48, 12) C(36, 12) C(24, 12) = 48! / (12!)^4

• Consider the 4! ways of assigning the aces to the piles.
• The desired probability:

[ C(48, 12) C(36, 12) C(24, 12) / ( C(52, 13) C(39, 13) C(26, 13) ) ] × 4! = 4! × 48! × (13!)^4 / (52! × (12!)^4) = 0.1055

Solution using the conditional probability

Define

• E1 = {The ace of spades is in any one of the piles }


• E2 = {The ace of spades and the ace of hearts are in different piles }
• E3 = {The ace of spades, hearts, and diamonds are all in different piles }
• E4 = {All four aces are in different piles}

The desired probability:

P(E1 E2 E3 E4) = P(E1) P(E2|E1) P(E3|E1 E2) P(E4|E1 E2 E3)

• P(E1) = 1

• P(E2|E1) = 39/51

• P(E3|E1 E2) = 26/50

• P(E4|E1 E2 E3) = 13/49

• P(E1 E2 E3 E4) = (39/51)(26/50)(13/49) = 0.1055.
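Both approaches agree; as a quick Matlab check of the conditional product:

prod([39 26 13] ./ [51 50 49])    % = 0.1055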

Section 3.3, Bayes’ Formula

The basic Bayes’ formula:

P(E) = P(E|F) P(F) + P(E|F^c) (1 − P(F))

Proof:

• By Venn diagram. Or
• By algebra using the conditional probability formula

Example (3.3.3a):

An accident-prone person will have an accident within a fixed one-year period


with probability 0.4. This probability decreases to 0.2 for a non-accident-prone
person. Suppose 30% of the population is accident prone.

Q 1: What is the probability that a random person will have an accident within a
fixed one-year period?

Q 2: Suppose a person has an accident within a year. What is the probability that
he/she is accident prone?

Example:

Let p be the probability that the student knows the answer and 1 − p the probability that the student guesses. Assume that a student who guesses will be correct with probability 1/m, where m is the total number of multiple-choice alternatives.

Q: What is the conditional probability that a student knew the answer, given that
he/she answered it correctly?

Solution:

Let C = event that the student answers the question correctly.

Let K = event that he/she actually knows the answer.

P (K|C) =??

Solution:

Let C = event that the student answers the question correctly.

Let K = event that he/she actually knows the answer.

P(K|C) = P(KC) / P(C) = P(C|K) P(K) / [ P(C|K) P(K) + P(C|K^c) P(K^c) ]
       = (1 × p) / ( 1 × p + (1/m)(1 − p) )
       = mp / ( 1 + (m − 1)p )
————

What happens when m is large?



Example (3.3.3d):

A lab blood test is 95% effective in detecting a certain disease when it is present. However, the test also yields a false positive result for 1% of the healthy people tested. Assume 0.5% of the population actually has the disease.

Q: What is the probability that a person has the disease given that the test result
is positive?

Solution:

Let D = event that the tested person has the disease.

Let E = event that the test result is positive.

P (D|E) = ??

The Generalized Bayes’ Formula

Suppose F1, F2, ..., Fn are mutually exclusive events such that ∪_{i=1}^{n} Fi = S. Then

P(E) = Σ_{i=1}^{n} P(E|Fi) P(Fi),

which is easy to understand from the Venn diagram.

P(Fj|E) = P(E Fj) / P(E) = P(E|Fj) P(Fj) / Σ_{i=1}^{n} P(E|Fi) P(Fi)

Example (3.3.3m):

A bin contains 3 different types of disposable flashlights. The probability that a


type 1 flashlight will give over 100 hours of use is 0.7, with the corresponding
probabilities for type 2 and type 3 flashlights being 0.4 and 0.3 respectively.
Suppose that 20% of the flashlights in the bin are type 1, and 30% are type 2,
and 50% are type 3.

Q 1: What is the probability that a randomly chosen flashlight will give more than
100 hours of use?

Q 2: Given the flashlight lasted over 100 hours, what is the conditional probability
that it was type j ?

Q 1: What is the probability that a randomly chosen flashlight will give more than
100 hours of use?

0.7 × 0.2 + 0.4 × 0.3 + 0.3 × 0.5 = 0.14 + 0.12 + 0.15 = 0.41

Q 2: Given the flashlight lasted over 100 hours, what is the conditional probability
that it was type j ?

Type 1: 0.14/0.41 = 0.3415
Type 2: 0.12/0.41 = 0.2927
Type 3: 0.15/0.41 = 0.3659

Independent Events

Definition:

Two events E and F are said to be independent if

P (EF ) = P (E)P (F )

Consequently, if E and F are independent, then

P (E|F ) = P (E), P (F |E) = P (F ).

Being independent is different from being mutually exclusive!



Check Independence by Verifying P (EF ) = P (E)P (F )

Example: Two coins are flipped and all 4 outcomes are equally likely. Let E be
the event that the first lands heads and F the event that the second lands tails.

Verify that E and F are independent.

Example: Suppose we toss 2 fair dice. Let E denote the event that the sum of
two dice is 6 and F denote the event that the first die equals 4.

Verify that E and F are not independent.



Proposition 4.1:

If E and F are independent, then so are E and F c .

Proof:

Given P (EF ) = P (E)P (F )


Need to show P (EF c ) = P (E)P (F c ).

Which formula contains both P (EF ) and P (EF c )?



Independence of Three Events

Definition:

The three events E, F, and G are said to be independent if

P(EFG) = P(E) P(F) P(G)
P(EF) = P(E) P(F)
P(EG) = P(E) P(G)
P(FG) = P(F) P(G)

Independence of More Than Three Events

Definition:

The events Ei, i = 1 to n, are said to be independent if, for every subcollection E1, E2, ..., Er, r ≤ n, of these events,

P(E1 E2 ... Er) = P(E1) P(E2) ... P(Er)



Example (3.4.4f):

An infinite sequence of independent trials is to be performed. Each trial results in


a success with probability p and a failure with probability 1 − p.

Q: What is the probability that

1. at least one success occurs in the first n trials.

2. exactly k successes occur in the first n trials.

3. all trials result in successes?



Solution:

Q: What is the probability that

1. at least one success occurs in the first n trials:

1 − (1 − p)^n

2. exactly k successes occur in the first n trials:

C(n, k) p^k (1 − p)^{n−k}

How is this formula related to the binomial theorem?

3. all trials result in successes?

p^n → 0 (as n → ∞) if p < 1.

Example (3.4.4h):

Independent trials, consisting of rolling a pair of fair dice, are performed. What is
the probability that an outcome of 5 appears before an outcome of 7 when the
outcome is the sum of the dice?

Solution 1:

Let En denote the event that no 5 or 7 appears on the first n − 1 trials and a 5 appears on the nth trial. Then the desired probability is

P( ∪_{n=1}^{∞} En ) = Σ_{n=1}^{∞} P(En)

Why are Ei and Ej mutually exclusive?

P (En ) = P ( no 5 or 7 on the first n − 1 trials AND a 5 on the nth trial)

By independence
P (En ) = P ( no 5 or 7 on the first n − 1 trials)×P ( a 5 on the nth trial)

P ( a 5 on the nth trial) = P (a 5 on any trial)

n−1
P ( no 5 or 7 on the first n − 1 trials) = [P (no 5 or 7 on any trial)]

P (no 5 or 7 on any trial) = 1 − P (either a 5 OR a 7 on any trial)

P (either a 5 OR a 7 on any trial) = P (a 5 on any trial) + P (a 7 on any trial)



P(En) = P(no 5 or 7 on the first n − 1 trials) × P(a 5 on the nth trial)
      = (1 − 4/36 − 6/36)^{n−1} × (4/36) = (13/18)^{n−1} × (1/9)

The desired probability:

P( ∪_{n=1}^{∞} En ) = Σ_{n=1}^{∞} P(En) = (1/9) × 1/(1 − 13/18) = 2/5.
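A Monte Carlo sanity check in Matlab (a sketch, not from the slides; 10^5 trials is an arbitrary choice):

trials = 1e5; wins = 0;
for t = 1:trials
    while true
        s = randi(6) + randi(6);               % sum of two fair dice
        if s == 5, wins = wins + 1; break; end
        if s == 7, break; end
    end
end
wins / trials                                  % should be close to 2/5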

Solution 2 (using a conditional argument):

Let E = the desired event.

Let F = the event that the first trial is a 5.

Let G = the event that the first trial is a 7.

Let H = the event that the first trial is neither a 5 nor 7.

P (E) = P (E|F )P (F ) + P (E|G)P (G) + P (E|H)P (H)



A General Result:

If E and F are mutually exclusive events of an experiment, then when


independent trials of this experiment are performed, E will occur before F with
probability

P(E) / ( P(E) + P(F) )

The Gambler’s Ruin Problem (Example 3.4.4k)

Two gamblers, A and B , bet on the outcomes of successive flips of a coin. On


each flip, if the coin comes up heads, A collects 1 unit from B, whereas if it comes
up tails, A pays 1 unit to B. They continue until one of them runs out of money.

If it is assumed that the successive flips of the coin are independent and each flip
results in a head with probability p, what is the probability that A ends up with all
the money if he starts with i units and B starts with N − i units?

Intuition: What is this probability if the coin is fair (i.e., p = 1/2)?



Solution:

E = event that A ends with all the money when he starts with i.
H = event that first flip lands heads.

Pi = P (E) = P (E|H)P (H) + P (E|H c )P (H c )

P0 =??
PN =??
P (E|H) =??
P (E|H c ) =??

Pi = P(E) = P(E|H) P(H) + P(E|H^c) P(H^c)
   = p P_{i+1} + (1 − p) P_{i−1} = p P_{i+1} + q P_{i−1}

=⇒ P_{i+1} = (Pi − q P_{i−1}) / p   =⇒   P_{i+1} − Pi = (q/p)(Pi − P_{i−1})

Since P0 = 0:

P2 − P1 = (q/p) P1
P3 − P2 = (q/p)(P2 − P1) = (q/p)^2 P1
...
Pi − P_{i−1} = (q/p)^{i−1} P1

"   2  i−1 #
q q q
Pi =P1 1 + + + ... +
p p p
  i 
q
1 − p
=P1  if q 6= p
  
q
1− p

Because PN = 1, we know
 
1 − pq
P1 =  N if q 6= p
1 − pq
 i
1 − pq
Pi =  N if q 6= p
1 − pq

(Do we really have to worry about q = p = 12 ?)
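A small Matlab sketch (not from the slides) evaluating Pi for an arbitrary example, p = 0.6 and N = 10:

p = 0.6; q = 1 - p; N = 10; i = 0:N;
Pi = (1 - (q/p).^i) ./ (1 - (q/p)^N);   % Pi(1) = P0 = 0, Pi(end) = PN = 1
plot(i, Pi, 'o-'); xlabel('i'); ylabel('P_i');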



The Matching Problem (Example 3.5.5d)

At a party n men take off their hats. The hats are then mixed up, and each man
randomly selects one. We say that a match occurs if a man selects his own hat.

Q: What is the probability of


(a) no matches;

(b) exactly k matches?



Solution (a): using recursion and conditional probabilities.

1. Let E = event that no match occurs. Denote Pn = P(E).


Let M = event that the first man selects his own hat.

2. Pn = P (E) = P (E|M )P (M ) + P (E|M c )P (M c )

3. Express Pn as a function of Pn−1, Pn−2, etc.

4. Identify the boundary conditions, eg P1 =?, P2 =?.

5. Prove Pn , eg, by induction.



Solution (a): using recursion and conditional probabilities.

1. Let E = event that no match occurs. Denote Pn = P(E).
   Let M = event that the first man selects his own hat.

2. Pn = P(E) = P(E|M) P(M) + P(E|M^c) P(M^c)

   P(E|M) = 0,   P(M^c) = (n − 1)/n,   so Pn = ((n − 1)/n) P(E|M^c)

3. Express Pn as a function of Pn−1, Pn−2, etc.

   P(E|M^c) = (1/(n − 1)) Pn−2 + Pn−1,   so Pn = ((n − 1)/n) Pn−1 + (1/n) Pn−2.

4. Identify the boundary conditions:
   P1 = 0, P2 = 1/2.

5. Prove Pn by induction:

   Pn = 1/2! − 1/3! + 1/4! − ... + (−1)^n/n!

(What is a good approximation of Pn?)
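(A Matlab sketch, not from the slides: summing the alternating series shows Pn converges to e^{−1} = 0.3679 very quickly.)

n = 10;
Pn = sum((-1).^(2:n) ./ factorial(2:n))   % 1/2! - 1/3! + ... + (-1)^n/n!
exp(-1)                                   % nearly identical already at n = 10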

Solution (b):

1. Fix a group of k men.

2. The probability that these k men select their own hats.

3. The probability that none of the remaining n − k men select their own hats.

4. Multiplicative counting principle.

5. Adjustment for selecting k men.



Solution (b):

1. Fix a group of k men.

2. The probability that these k men select their own hats:

   (1/n) (1/(n − 1)) (1/(n − 2)) ... (1/(n − k + 1)) = (n − k)!/n!

3. The probability that none of the remaining n − k men select their own hats:

   P_{n−k}

4. Multiplicative counting principle:

   ((n − k)!/n!) × P_{n−k}

5. Adjustment for selecting the k men:

   ((n − k)!/n!) P_{n−k} C(n, k)

P (.|F ) Is A Probability

Conditional probabilities satisfy all the properties of ordinary probabilities.

Proposition 5.1

• 0 ≤ P (E|F ) ≤ 1
• P (S|F ) = 1
• If Ei, i = 1, 2, ..., are mutually exclusive events, then

P( ∪_{i=1}^{∞} Ei | F ) = Σ_{i=1}^{∞} P(Ei|F)

Conditionally Independent Events

Definition:

Two events, E1 and E2 , are said to be conditionally independent given event F , if

P (E1 |E2 F ) = P (E1 |F )

or, equivalently

P (E1 E2 |F ) = P (E1 |F )P (E2 |F )



Laplace’s Rule of Succession (Example 3.5.5e)

There are k + 1 coins in a box. The ith coin will, when flipped, turn up heads with
probability i/k , i = 0, 1, ..., k . A coin is randomly selected from the box and is
then repeatedly flipped.

Q: If the first n flip all result in heads, what is the conditional probability that the
(n + 1)st flip will do likewise?

Key: Conditioning on that the ith coin is selected, the outcomes will be
conditionally independent.

Solution:

Let Ci = event that the ith coin is selected.

Let Fn = event the first n flips all result in heads.

Let H = event that the (n + 1)st flip is a head.

The desired probability:

P(H|Fn) = Σ_{i=0}^{k} P(H|Fn Ci) P(Ci|Fn)

By conditional independence P (H|Fn Ci ) =??

Solve P(Ci|Fn) using Bayes’ formula.



The desired probability:

P(H|Fn) = Σ_{i=0}^{k} P(H|Fn Ci) P(Ci|Fn)

By conditional independence, P(H|Fn Ci) = P(H|Ci) = i/k

Using Bayes’ formula,

P(Ci|Fn) = P(Ci Fn) / P(Fn) = P(Fn|Ci) P(Ci) / Σ_{j=0}^{k} P(Fn|Cj) P(Cj)
         = (i/k)^n (1/(k+1)) / Σ_{j=0}^{k} (j/k)^n (1/(k+1))

The final answer:

P(H|Fn) = Σ_{j=0}^{k} (j/k)^{n+1} / Σ_{j=0}^{k} (j/k)^n
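A Matlab sketch (not from the slides): for large k the answer approaches (n + 1)/(n + 2), Laplace’s classical value.

k = 1000; n = 3; j = 0:k;
sum((j/k).^(n+1)) / sum((j/k).^n)   % close to (n+1)/(n+2) = 0.8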

Chapter 4: (Discrete) Random Variables

Definition:

A random variable is a function from the sample space S to the real numbers.

A discrete random variable is a random variable that takes on only a finite (or at most a countably infinite) number of values.

Example (1a):

Suppose the experiment consists of tossing 3 fair coins. Let Y denote the
number of heads appearing, then Y is a discrete random variable taking on one
of the values 0, 1, 2, 3 with probabilities
P(Y = 0) = P{(T, T, T)} = 1/8
P(Y = 1) = P{(H, T, T), (T, H, T), (T, T, H)} = 3/8
P(Y = 2) = P{(T, H, H), (H, T, H), (H, H, T)} = 3/8
P(Y = 3) = P{(H, H, H)} = 1/8

Check:

P( ∪_{i=0}^{3} {Y = i} ) = Σ_{i=0}^{3} P{Y = i} = 1.

Example (1c)

Independent trials, consisting of the flipping of a coin having probability p of coming up heads, are continuously performed until either a head occurs or a total of n flips is made. If we let X denote the number of times the coin is flipped, then X is a random variable taking on one of the values 1, 2, ..., n.

P(X = 1) = P{H} = p
P(X = 2) = P{(T, H)} = (1 − p)p
P(X = 3) = P{(T, T, H)} = (1 − p)^2 p
...
P(X = n − 1) = (1 − p)^{n−2} p
P(X = n) = (1 − p)^{n−1}

Check: Σ_{i=1}^{n} P(X = i) = 1??

Example (1d)

3 balls are randomly chosen from an urn containing 3 white, 3 red, and 5 black
balls. Suppose that we win $1 for each white ball selected and lose $1 for each
red ball selected. Let X denote our total winnings from the experiment.

Q 1: What possible values can X take on?

Q 2: What are the respective probabilities?



Solution 1: What possible values can X take on?

X ∈ {3, 2, 1, 0, −1, −2, −3}.

Solution 2: What are the respective probabilities?

P(X = 0) = [ C(5, 3) + C(3, 1) C(3, 1) C(5, 1) ] / C(11, 3) = 55/165

P(X = 3) = P(X = −3) = C(3, 3) / C(11, 3) = 1/165

The Coupon Collection Problem (Example 1e)

There are N distinct types of coupons, and each time one obtains a coupon it is equally likely to be any one of the N types, independent of prior selections.

A random variable T = the number of coupons that need to be collected until one obtains a complete set with at least one of each type.

Q: What is P (T = n)?

Solution:

1. It suffices to study P(T > n),
   because P(T = n) = P(T > n − 1) − P(T > n)

2. Define the event Aj = no type-j coupon is collected among the first n.

3. P(T > n) = P( ∪_{j=1}^{N} Aj )

Then what?
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 152

 
P\left(\bigcup_{j=1}^{N} A_j\right) = \sum_j P(A_j) - \sum_{j_1<j_2} P(A_{j_1}A_{j_2}) + \sum_{j_1<j_2<j_3} P(A_{j_1}A_{j_2}A_{j_3}) - \cdots + (-1)^{N+1} P(A_1 A_2 \cdots A_N)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 153

P(A_j) = \left(\frac{N-1}{N}\right)^n

P(A_{j_1}A_{j_2}) = \left(\frac{N-2}{N}\right)^n

...

P(A_{j_1}A_{j_2}\cdots A_{j_k}) = \left(\frac{N-k}{N}\right)^n

...

P(A_{j_1}A_{j_2}\cdots A_{j_{N-1}}) = \left(\frac{1}{N}\right)^n

P(A_1 A_2 \cdots A_N) = 0
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 154

 
P(T > n) = P\left(\bigcup_{j=1}^{N} A_j\right) = \sum_j P(A_j) - \sum_{j_1<j_2} P(A_{j_1}A_{j_2}) + \sum_{j_1<j_2<j_3} P(A_{j_1}A_{j_2}A_{j_3}) - \cdots + (-1)^{N+1} P(A_1\cdots A_N)

= N\left(\frac{N-1}{N}\right)^n - \binom{N}{2}\left(\frac{N-2}{N}\right)^n + \binom{N}{3}\left(\frac{N-3}{N}\right)^n - \cdots + (-1)^N \binom{N}{N-1}\left(\frac{1}{N}\right)^n

= \sum_{j=1}^{N-1} (-1)^{j+1} \binom{N}{j} \left(\frac{N-j}{N}\right)^n
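This alternating sum is easy to evaluate numerically (a minimal Matlab sketch of ours; the function name is hypothetical):

function pt = coupon_tail(N, n)
% P(T > n): probability the first n coupons miss at least one of the N types
j = 1:(N-1);
binom = arrayfun(@(jj) nchoosek(N, jj), j);
pt = sum((-1).^(j+1) .* binom .* ((N-j)/N).^n);

Then P(T = n) = coupon_tail(N, n-1) - coupon_tail(N, n).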
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 155

Cumulative Distribution Function

For a random variable X , its cumulative distribution function F is defined as

F (x) = P {X ≤ x}

F (x) is a non-decreasing function:


If x ≤ y , then F (x) ≤ F (y).
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 156

Section 4.2: Discrete Random Variables

Discrete random variable: A random variable X that takes on at most a
countable number of possible values.

The probability mass function: p(a) = P\{X = a\}

If X takes one of the values x_1, x_2, \ldots, then

p(x_i) \geq 0, \qquad \sum_{i=1}^{\infty} p(x_i) = 1
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 157

Example (4.2.2a): The probability mass function of a random variable X is
given by p(i) = \frac{c\lambda^i}{i!}, \quad i = 0, 1, 2, \ldots

Q: Find

• the relation between c and λ.


• P {X = 0};
• P {X > 2};
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 158

Solution:

• The relation between c and λ:

\sum_{i=0}^{\infty} p(i) = 1 \implies c \sum_{i=0}^{\infty} \frac{\lambda^i}{i!} = 1 \implies c e^{\lambda} = 1 \implies c = e^{-\lambda}.

• P {X = 0};

P {X = 0} = c = e−λ .

• P\{X > 2\}:

P\{X > 2\} = 1 - P\{X = 0\} - P\{X = 1\} - P\{X = 2\}
= 1 - c - c\lambda - c\frac{\lambda^2}{2}
= 1 - e^{-\lambda} - \lambda e^{-\lambda} - \frac{\lambda^2 e^{-\lambda}}{2}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 159

Section 4.3: Expected Value

Definition:

If X is a discrete random variable having a probability mass function p(x), the
expected value of X is defined by

E(X) = \sum_{x: p(x) > 0} x\, p(x)

The expected value is a weighted average of possible values that X can take on.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 160

Example: Find E(X) where X is the outcome when we roll a fair die.

E(X) = \sum_{i=1}^{6} i \cdot \frac{1}{6} = \frac{1}{6}\sum_{i=1}^{6} i = \frac{7}{2}

Example: Let I be an indicator variable for event A:

I = \begin{cases} 1 & \text{if } A \text{ occurs} \\ 0 & \text{if } A^c \text{ occurs} \end{cases}

E(I) =??
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 161

Example (3d): A school class of 120 students is driven in 3 buses to a
symphony. There are 36 students in one of the buses, 40 in another, and 44 in the
third bus. When the buses arrive, one of the 120 students is randomly chosen.
Let X denote the number of students on the bus of that randomly chosen
student. Find E(X).

Solution:

X = {??}
P (X) =??
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 162

Example (4a)

P {X = −1} = 0.2, P {X = 0} = 0.5, P {X = 1} = 0.3.

Find E(X^2).

Solution: Let Y = X^2.

P\{Y = 0\} = P\{X = 0\} = 0.5

P\{Y = 1\} = P\{X = 1 \text{ OR } X = -1\} = P\{X = 1\} + P\{X = -1\} = 0.3 + 0.2 = 0.5

Therefore

E(X^2) = E(Y) = 0 \times 0.5 + 1 \times 0.5 = 0.5


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 163

Verify:

\sum_i P\{X = x_i\} x_i^2 = P\{X = -1\}(-1)^2 + P\{X = 0\}(0)^2 + P\{X = 1\}(1)^2 = 0.2 + 0 + 0.3 = 0.5 = E(X^2)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 164

Section 4.4: Expectation of a Function of Random Variable

Proposition 4.1 If X is a discrete random variable that takes on one of the


values x_i, i \geq 1, with respective probabilities p(x_i), then for any real-valued
function g,

E(g(X)) = \sum_i g(x_i)\, p(x_i)

Proof: Group the i's with the same g(x_i).


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 165

Proof:

Group the i’s with the same g(xi ).

\sum_i g(x_i) p(x_i) = \sum_j \sum_{i: g(x_i) = y_j} g(x_i) p(x_i)
= \sum_j y_j \sum_{i: g(x_i) = y_j} p(x_i)
= \sum_j y_j P\{g(X) = y_j\}
= E(g(X)).
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 166

Corollary 4.1 If a and b are constants, then

E(aX + b) = aE(X) + b.

Proof Use the definition of E(X).

The nth moment:

E(X^n) = \sum_{x: p(x) > 0} x^n p(x)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 167

Section 4.5: Variance

The expectation E(X) measures the average of possible values of X .

The variance V ar(X) measures the variation, or spread of these values.

Definition: If X is a random variable with mean µ, then the variance of X is
defined by

Var(X) = E\left[(X - \mu)^2\right]
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 168

An alternative formula for V ar(X)

Var(X) = E(X^2) - (E(X))^2 = E(X^2) - \mu^2

Proof:

Var(X) = E\left[(X - \mu)^2\right]
= \sum_x (x - \mu)^2 p(x)
= \sum_x (x^2 - 2\mu x + \mu^2) p(x)
= \sum_x x^2 p(x) - 2\mu \sum_x x\, p(x) + \mu^2 \sum_x p(x)
= E(X^2) - 2\mu^2 + \mu^2
= E(X^2) - \mu^2.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 169

Two important identities

If a and b are constants, then

V ar(a) = V ar(b) = 0

V ar(aX + b) = a2 V ar(X)

Proof: Use the definitions.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 170

E(aX + b) = aE(X) + b

Var(aX + b) = E\left[\left((aX + b) - E(aX + b)\right)^2\right]
= E\left[\left((aX + b) - aE(X) - b\right)^2\right]
= E\left[\left(aX - aE(X)\right)^2\right]
= a^2 E\left[(X - E(X))^2\right]
= a^2 Var(X)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 171

Section 4.6: Bernoulli and Binomial Random Variables

Bernoulli random variable X ∼ Bernoulli(p)

P {X = 1} = p(1) = p,
P {X = 0} = p(0) = 1 − p.

Properties:

E(X) = p
V ar(X) = p(1 − p)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 172

Binomial random variable

Definition: A random variable X represents the total number of successes that
occur in n independent trials, where each trial results in a success with probability p
and in a failure with probability 1 - p. Then

X ∼ Binomial(n, p).

The probability mass function:

p(i) = P\{X = i\} = \binom{n}{i} p^i (1-p)^{n-i}
• Proof?
• \sum_{i=0}^{n} p(i) = 1?
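For the second bullet, the binomial theorem gives the check in one line (added here for completeness):

\sum_{i=0}^{n} \binom{n}{i} p^i (1-p)^{n-i} = \left(p + (1-p)\right)^n = 1.

For the first, each particular sequence with i successes has probability p^i(1-p)^{n-i}, and there are \binom{n}{i} such sequences.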
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 173

A very useful representation:

X ∼ binomial(n, p) can be viewed as the sum of n independent Bernoulli
random variables with parameter p:

X = \sum_{i=1}^{n} Y_i, \qquad Y_i ∼ Bernoulli(p)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 174

Example: Five fair coins are flipped. If the outcomes are assumed
independent, find the probability mass function of the number of heads obtained.

Solution:

• X = number of heads obtained.

• X ∼ binomial(n, p). n=??, p=??

• P {X = i} =?
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 175

Properties of X ∼ binomial(n, p)

Expectation: E(X) = np.

Variance: V ar(X) = np(1 − p).

Proof:??
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 176

E(X) = \sum_{i=0}^{n} i \binom{n}{i} p^i (1-p)^{n-i}
= \sum_{i=0}^{n} i \frac{n!}{i!(n-i)!} p^i (1-p)^{n-i}
= \sum_{i=1}^{n} i \frac{n!}{i!(n-i)!} p^i (1-p)^{n-i}
= np \sum_{i=1}^{n} \frac{(n-1)!}{(i-1)!((n-1)-(i-1))!} p^{i-1} (1-p)^{(n-1)-(i-1)}
= np \sum_{j=0}^{n-1} \frac{(n-1)!}{j!((n-1)-j)!} p^j (1-p)^{(n-1)-j}
= np
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 177

E(X^2) = \sum_{i=0}^{n} i^2 \binom{n}{i} p^i (1-p)^{n-i} = \sum_{i=1}^{n} i^2 \frac{n!}{i!(n-i)!} p^i (1-p)^{n-i}

= np \sum_{i=1}^{n} i \frac{(n-1)!}{(i-1)!((n-1)-(i-1))!} p^{i-1} (1-p)^{(n-1)-(i-1)}

= np \sum_{j=0}^{n-1} (j+1) \frac{(n-1)!}{j!((n-1)-j)!} p^j (1-p)^{(n-1)-j}

= np\, E(Y+1), \qquad Y ∼ binomial(n-1, p)
= np\left((n-1)p + 1\right)

Var(X) = E(X^2) - (E(X))^2 = np((n-1)p + 1) - (np)^2 = np(1-p)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 178

What Happens When n → ∞ and np → λ?

X ∼ binomial(n, p), n → ∞ and np → λ:

P\{X = i\} = \frac{n!}{(n-i)!\,i!} p^i (1-p)^{n-i}
= \frac{(1-p)^n}{i!} \cdot \frac{p^i}{(1-p)^i} \cdot n(n-1)(n-2)\cdots(n-i+1)
= \frac{(1-p)^n (np)^i}{i!} \left(\frac{n(n-1)(n-2)\cdots(n-i+1)}{(1-p)^i n^i}\right)
→ \frac{e^{-λ} λ^i}{i!}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 179

\lim_{n→∞} \left(1 - \frac{λ}{n}\right)^n = e^{-λ}

For fixed (small) i,

\lim_{n→∞} \frac{n(n-1)(n-2)\cdots(n-i+1)}{(n-λ)^i} = 1
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 180

Section 4.7: The Poisson Random Variable

Definition: A random variable X , taking on one of the values 0, 1, 2, ..., is a


Poisson random variable with parameter λ if, for some λ > 0,

p(i) = P\{X = i\} = e^{-λ} \frac{λ^i}{i!}, \qquad i = 0, 1, 2, 3, \ldots

Check:

\sum_{i=0}^{∞} p(i) = \sum_{i=0}^{∞} e^{-λ} \frac{λ^i}{i!} = 1??
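The check follows from the Taylor series of the exponential (one line, added here):

\sum_{i=0}^{∞} e^{-λ} \frac{λ^i}{i!} = e^{-λ} \sum_{i=0}^{∞} \frac{λ^i}{i!} = e^{-λ} e^{λ} = 1.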
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 181

Facts about Poisson Distribution

• Poisson(λ) approximates binomial(n, p) when n is large and np ≈ λ:

\binom{n}{i} p^i (1-p)^{n-i} ≈ \frac{e^{-λ} λ^i}{i!}

• Poisson is widely used for modeling rare (p ≈ 0) events (but not too rare).
– The number of misprints on a page of a book

– The number of people in a community living to 100 years of age

– ...
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 182

Compare Binomial with Poisson

[Figure: two bar plots (n = 20, p = 0.2) comparing the binomial and Poisson mass functions p(i) for i = 0 to 20; the two sets of bars nearly coincide.]

Poisson approximation is good

• n does not have to be too large.


• p does not have to be too small.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 183

Matlab Code

function comp_binom_poisson(n,p);
% Compare the binomial(n,p) pmf with the Poisson(n*p) pmf
% (binopdf and poisspdf are from the Statistics Toolbox).
i = 0:n; P1 = binopdf(i,n,p); P2 = poisspdf(i,n*p);

figure; bar(i,P1,'g'); hold on; grid on;
bar(i,P2,'r'); legend('Binomial','Poisson');
xlabel('i'); ylabel('p(i)');
axis([-0.5 n 0 max(P1)+0.05]);
title(['n = ' num2str(n) ' p = ' num2str(p)]);

% Same comparison with the drawing order reversed
figure; bar(i,P2,'r'); hold on; grid on;
bar(i,P1,'g'); legend('Poisson','Binomial');
xlabel('i'); ylabel('p(i)');
axis([-0.5 n 0 max(P1)+0.05]);
title(['n = ' num2str(n) ' p = ' num2str(p)]);
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 184

Example: Suppose that the number of typographical errors on a single page
of a book has a Poisson distribution with parameter λ = \frac{1}{2}.

Q: Calculate the probability that there is at least one error on this page.

P(X ≥ 1) = 1 - P(X = 0) = 1 - e^{-1/2} ≈ 0.393.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 185

Expectation and Variance of Poisson

X ∼ P oisson(λ) Y ∼ binomial(n, p)

X approximates Y when (1): n large, (2): p small, and (3): np ≈ λ

E(Y ) = np, V ar(Y ) = np(1 − p)

Guess E(X) = ??, V ar(X) =??


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 186

X ∼ P oisson(λ)

E(X) = \sum_{i=1}^{∞} i\, e^{-λ} \frac{λ^i}{i!} = λ \sum_{i=1}^{∞} e^{-λ} \frac{λ^{i-1}}{(i-1)!} = λ \sum_{j=0}^{∞} e^{-λ} \frac{λ^j}{j!} = λ

E(X^2) = \sum_{i=1}^{∞} i^2 e^{-λ} \frac{λ^i}{i!} = λ \sum_{i=1}^{∞} i\, e^{-λ} \frac{λ^{i-1}}{(i-1)!}
= λ \sum_{j=0}^{∞} (j+1) e^{-λ} \frac{λ^j}{j!} = λ\left(E(X) + 1\right) = λ^2 + λ

Var(X) = λ^2 + λ - λ^2 = λ
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 187

Section 4.8.1: Geometric Random Variable

Definition: Suppose that independent trials, each having probability p of


being a success, are performed until a success occurs. X is the number of trials
required. Then

P {X = n} = (1 − p)n−1 p, n = 1, 2, 3, ...

Check: \sum_{n=1}^{∞} P\{X = n\} = \sum_{n=1}^{∞} (1-p)^{n-1} p = 1??
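The check is a geometric series (added here):

\sum_{n=1}^{∞} (1-p)^{n-1} p = p \cdot \frac{1}{1-(1-p)} = 1.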
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 188

Expectation X ∼ Geometric(p).


E(X) = \sum_{n=1}^{∞} n (1-p)^{n-1} p \qquad (q = 1-p)
= p + 2qp + 3q^2 p + 4q^3 p + 5q^4 p + \ldots
= p\left(1 + 2q + 3q^2 + 4q^3 + 5q^4 + \ldots\right) =??

We know

1 + q + q^2 + q^3 + q^4 + \ldots = \frac{1}{1-q}

How about

I = 1 + 2q + 3q^2 + 4q^3 + 5q^4 + \ldots =??
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 189

I = 1 + 2q + 3q 2 + 4q 3 + 5q 4 + ... =??

First trick:

q \times I = q + 2q^2 + 3q^3 + 4q^4 + 5q^5 + \ldots

I - qI = 1 + q + q^2 + q^3 + q^4 + \ldots = \frac{1}{1-q}

\implies I = \frac{1}{(1-q)^2}

\implies E(X) = pI = \frac{p}{(1-q)^2} = \frac{1}{p}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 190

I = 1 + 2q + 3q 2 + 4q 3 + 5q 4 + ... =??

Second trick:

I = \frac{d}{dq}\left(q + q^2 + q^3 + q^4 + q^5 + \ldots\right), and since q + q^2 + q^3 + \ldots = \frac{1}{1-q} - 1,

I = \frac{d}{dq}\left(\frac{1}{1-q} - 1\right) = \frac{1}{(1-q)^2}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 191

Variance X ∼ Geometric(p).


E(X^2) = \sum_{n=1}^{∞} n^2 (1-p)^{n-1} p \qquad (q = 1-p)
= p\left(1 + 2^2 q + 3^2 q^2 + 4^2 q^3 + 5^2 q^4 + \ldots\right)

J = 1 + 2^2 q + 3^2 q^2 + 4^2 q^3 + 5^2 q^4 + \ldots =??
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 192

J = 1 + 22 q + 32 q 2 + 42 q 3 + 52 q 4 + ... =??

Trick 1:

qJ = q + 2^2 q^2 + 3^2 q^3 + 4^2 q^4 + 5^2 q^5 + \ldots
J(1-q) = 1 + (2^2 - 1^2)q + (3^2 - 2^2)q^2 + (4^2 - 3^2)q^3 + \ldots = 1 + 3q + 5q^2 + 7q^3 + \ldots
J(1-q)q = q + 3q^2 + 5q^3 + 7q^4 + \ldots
J(1-q)^2 = 1 + 2q + 2q^2 + 2q^3 + \ldots = -1 + \frac{2}{1-q} = \frac{1+q}{1-q}

J = \frac{1+q}{(1-q)^3}, \qquad E(X^2) = pJ = \frac{p(2-p)}{p^3} = \frac{2-p}{p^2}, \qquad Var(X) = \frac{2-p}{p^2} - \frac{1}{p^2} = \frac{1-p}{p^2}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 193

J = 1 + 22 q + 32 q 2 + 42 q 3 + 52 q 4 + ... =??

Trick 2:

J = \sum_{n=1}^{∞} n^2 q^{n-1} = \sum_{n=1}^{∞} \frac{d}{dq}\left(n q^n\right) = \frac{d}{dq}\left[\sum_{n=1}^{∞} n q^n\right]

Since \sum_{n=1}^{∞} n q^n = q \sum_{n=1}^{∞} n q^{n-1} = qI = \frac{q}{(1-q)^2},

J = \frac{d}{dq}\left[\frac{q}{(1-q)^2}\right] = \frac{(1-q)^2 + 2q(1-q)}{(1-q)^4} = \frac{1+q}{(1-q)^3}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 194

Section 4.8.2: Negative Binomial Distribution

Definition: Independent trials, each having probability p of being a success,
are performed until a total of r successes is accumulated. Let X denote the
number of trials required. Then X is a negative binomial random variable:

X ∼ NB(r, p)

X ∼ Geometric(p) is the same as X ∼ NB(1, p).

What is the probability mass function of X ∼ NB(r, p)?


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 195

 
P\{X = n\} = \binom{n-1}{r-1} p^r (1-p)^{n-r}, \qquad n = r, r+1, r+2, \ldots

E(X^k) = \sum_{n=r}^{∞} n^k \binom{n-1}{r-1} p^r (1-p)^{n-r}
= \sum_{n=r}^{∞} n^k \frac{(n-1)!}{(r-1)!(n-r)!} p^r (1-p)^{n-r}

Now what?
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 196


E(X^k) = \sum_{n=r}^{∞} n^{k-1} \frac{n!}{(r-1)!(n-r)!} p^r (1-p)^{n-r}

= \frac{r}{p} \sum_{n=r}^{∞} n^{k-1} \frac{n!}{r!(n-r)!} p^{r+1} (1-p)^{n-r} \qquad (m = n+1)

= \frac{r}{p} \sum_{m=r+1}^{∞} (m-1)^{k-1} \frac{(m-1)!}{r!(m-1-r)!} p^{r+1} (1-p)^{m-1-r}

= \frac{r}{p} E\left[(Y-1)^{k-1}\right], \qquad Y ∼ NB(r+1, p)

E(X) =??, \quad E(X^2) =??, \quad Var(X) =??


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 197

E(X^k) = \frac{r}{p} E\left[(Y-1)^{k-1}\right], \qquad Y ∼ NB(r+1, p)

Let k = 1. Then

E(X) = \frac{r}{p} E\left[(Y-1)^0\right] = \frac{r}{p}

Let k = 2. Then

E(X^2) = \frac{r}{p} E[Y - 1] = \frac{r}{p}\left(\frac{r+1}{p} - 1\right)

Var(X) = E(X^2) - E^2(X) = \frac{r(1-p)}{p^2}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 198

Example (4.8.8a) An urn contains N white and M black balls. Balls are
randomly selected, one at a time, until a black one is obtained. Each selected ball
is replaced before the next one is drawn.

Q(a): What is the probability that exactly n draws are needed?

Q(b): What is the expected number of draws needed?

Q(c): What is the probability that at least k draws are needed?


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 199

Solution: This is a geometric random variable with p = \frac{M}{M+N}.

Q(a): What is the probability that exactly n draws are needed?

P(X = n) = (1-p)^{n-1} p = \left(\frac{N}{M+N}\right)^{n-1} \frac{M}{M+N} = \frac{M N^{n-1}}{(M+N)^n}

Q(b): What is the expected number of draws needed?

E(X) = \frac{1}{p} = \frac{M+N}{M} = 1 + \frac{N}{M}

Q(c): What is the probability that at least k draws are needed?

P(X ≥ k) = \sum_{n=k}^{∞} (1-p)^{n-1} p = \frac{p(1-p)^{k-1}}{1-(1-p)} = \left(\frac{N}{M+N}\right)^{k-1}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 200

Example (4.8.8e): A pipe-smoking mathematician carries 2 matchboxes, 1 in
his left-hand pocket and 1 in his right-hand pocket. Each time he needs a match
he is equally likely to take it from either pocket. Consider the moment when the
mathematician first discovers that one of his matchboxes is empty. If it is assumed
that both matchboxes initially contained N matches, what is the probability that
there are exactly k matches in the other box, k = 0, 1, ..., N?
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 201

Solution:

1. Assume the left-hand pocket is empty.

2. X = total matches taken at the moment when left-hand pocket is empty.

3. View X as a random variable X ∼ N B(r, p). What is r , p?

4. Remember there are k matches remaining in the right-hand pocket.

5. The desired probability = P {X = n}. What is n?

6. Adjust by considering that either the left-hand or the right-hand box can be the empty one.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 202

Solution:

1. Assume the left-hand pocket is empty.

2. X = total matches taken at the moment when left-hand pocket is empty.


3. View X as a random variable X ∼ NB(r, p). What are r, p?

p = \frac{1}{2}, \quad r = N + 1.

4. Remember there are k matches remaining in the right-hand pocket.

5. The desired probability = P\{X = n\}. What is n?

n = 2N - k + 1.

6. Adjust by considering that either the left-hand or the right-hand box can be the empty one:

2 \binom{2N-k}{N} \left(\frac{1}{2}\right)^{2N-k+1}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 203

Section 4.8.3: Hypergeometric Distribution

Definition: A sample of size n is chosen randomly, without replacement, from
an urn containing N balls, of which m are white and N - m are black. Let X
denote the number of white balls selected; then X ∼ HG(N, m, n):

P\{X = i\} = \frac{\binom{m}{i}\binom{N-m}{n-i}}{\binom{N}{n}}, \qquad i = 0, 1, \ldots, n,

where n - (N - m) ≤ i ≤ \min(n, m).
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 204

If the n balls are chosen with replacement, then the count is binomial\left(n, p = \frac{m}{N}\right).

When m and N are large relative to n,

X ∼ HG(N, m, n) ≈ binomial\left(n, \frac{m}{N}\right)

E(X) ≈ np = \frac{nm}{N}\;?? \qquad Var(X) ≈ np(1-p) = \frac{nm}{N}\left(1 - \frac{m}{N}\right)\;??
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 205

E(X^k) = \sum_{i=1}^{n} i^k \frac{\binom{m}{i}\binom{N-m}{n-i}}{\binom{N}{n}}
= \sum_{i=1}^{n} i^{k-1} \frac{m!}{(i-1)!(m-i)!} \cdot \frac{(N-m)!}{(n-i)!(N-m-n+i)!} \cdot \frac{n!(N-n)!}{N!}
= \sum_{j=0}^{n-1} (j+1)^{k-1} \frac{m!}{j!(m-1-j)!} \cdot \frac{(N-m)!}{(n-1-j)!(N-m-n+1+j)!} \cdot \frac{n!(N-n)!}{N!}
= \frac{nm}{N} E\left[(Y+1)^{k-1}\right], \qquad Y ∼ HG(N-1, m-1, n-1)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 206

Section 4.9: Properties of Cumulative Distribution Function

F (x) = P {X ≤ x}

1. F(x) is a non-decreasing function: if a < b, then F(a) ≤ F(b)
2. \lim_{x→∞} F(x) = 1
3. \lim_{x→-∞} F(x) = 0
4. F(x) is right continuous.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 207

Chapter 5: Continuous Random Variables

Definition: X is a continuous random variable if there exists a non-negative


function f, defined for all real x ∈ (-∞, ∞), having the property that for any
set B of real numbers,

P\{X ∈ B\} = \int_B f(x)\,dx

The function f is called the probability density function of the random variable X .
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 208

Some Facts
• P\{X ∈ (-∞, ∞)\} = \int_{-∞}^{∞} f(x)\,dx = 1

• P\{X ∈ (a, b)\} = P\{a ≤ X ≤ b\} = \int_a^b f(x)\,dx

• P\{X = a\} = \int_a^a f(x)\,dx = 0

• F(a) = P\{X ≤ a\} = P\{X < a\} = \int_{-∞}^{a} f(x)\,dx

• F(∞) = 1, F(-∞) = 0

• f(x) = \frac{d}{dx} F(x)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 209

Example 5.1.1a: Suppose that X is a continuous random variable whose


probability density function is given by

f(x) = \begin{cases} C(4x - 2x^2) & 0 < x < 2 \\ 0 & \text{otherwise} \end{cases}

Q(a): What is the value of C ?

Q(b): Find P {X > 1}.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 210

Example 5.1.1d: Denote FX and fX the cumulative function and the


density function of a continuous random variable X , respectively. Find the density
function of Y = 2X .
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 211

Example 5.1.1d: Denote FX and fX the cumulative function and the


density function of a continuous random variable X , respectively. Find the density
function of Y = 2X .

Solution:

FY(y) = P\{Y ≤ y\} = P\{2X ≤ y\} = P\{X ≤ y/2\} = FX(y/2)

fY(y) = \frac{d}{dy} FY(y) = \frac{d}{dy} FX(y/2) = \frac{1}{2} fX(y/2)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 212

Derivatives of Common Functions

http://en.wikipedia.org/wiki/Derivative

• Exponential and logarithmic functions:
\frac{d}{dx} e^x = e^x, \quad \frac{d}{dx} a^x = a^x \log a \quad (Notation: \log x = \log_e x)
\frac{d}{dx} \log x = \frac{1}{x}, \quad \frac{d}{dx} \log_a x = \frac{1}{x \log a}

• Trigonometric functions:
\frac{d}{dx} \sin x = \cos x, \quad \frac{d}{dx} \cos x = -\sin x, \quad \frac{d}{dx} \tan x = \sec^2 x

• Inverse trigonometric functions:
\frac{d}{dx} \arcsin x = \frac{1}{\sqrt{1-x^2}}, \quad \frac{d}{dx} \arccos x = -\frac{1}{\sqrt{1-x^2}}, \quad \frac{d}{dx} \arctan x = \frac{1}{1+x^2}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 213

Integration by Parts

\int_a^b f(x)\,dx = f(x)\,x\Big|_a^b - \int_a^b x f'(x)\,dx

\int_a^b f(x)\,d[g(x)] = f(x)g(x)\Big|_a^b - \int_a^b g(x) f'(x)\,dx

\int_a^b f(x)h(x)\,dx = \int_a^b f(x)\,d[H(x)] = f(x)H(x)\Big|_a^b - \int_a^b H(x) f'(x)\,dx

where h(x) = H'(x).


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 214

Examples:

\int_a^b \log x\,dx = x\log x\Big|_a^b - \int_a^b x\,[\log x]'\,dx
= b\log b - a\log a - \int_a^b x \frac{1}{x}\,dx
= b\log b - a\log a - b + a.

\int_a^b x\log x\,dx = \frac{1}{2}\int_a^b \log x\,d[x^2] = \frac{1}{2}x^2\log x\Big|_a^b - \frac{1}{2}\int_a^b x^2 [\log x]'\,dx
= \frac{1}{2}b^2\log b - \frac{1}{2}a^2\log a - \frac{1}{2}\int_a^b x\,dx
= \frac{1}{2}b^2\log b - \frac{1}{2}a^2\log a - \frac{1}{4}\left(b^2 - a^2\right)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 215

\int_a^b x^n e^x\,dx = \int_a^b x^n\,d[e^x] = x^n e^x\Big|_a^b - n\int_a^b e^x x^{n-1}\,dx

\int_a^b x^n \cos x\,dx = \int_a^b x^n\,d[\sin x] = x^n \sin x\Big|_a^b - n\int_a^b \sin x\; x^{n-1}\,dx

\int_a^b \sin^n x \sin 2x\,dx = 2\int_a^b \sin^n x \sin x \cos x\,dx = 2\int_a^b \sin^{n+1} x\,d[\sin x]
= \frac{2}{n+2}\sin^{n+2} x\Big|_a^b
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 216

Expectation

Definition: Given a continuous random variable X with density f(x), its
expected value is defined as

E(X) = \int_{-∞}^{∞} x f(x)\,dx
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 217

Example: The density of X is given by



f(x) = \begin{cases} 2x & \text{if } 0 ≤ x ≤ 1 \\ 0 & \text{otherwise} \end{cases}

Find E(e^X).

Two solutions:

1. Find the density function of Y = e^X, denoted by fY(y); then E(Y) = \int_{-∞}^{∞} y\, fY(y)\,dy.

2. Apply the formula

E[g(X)] = \int_{-∞}^{∞} g(x) f(x)\,dx
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 218

Solution 1: Y = e^X.

FY(y) = P\{Y ≤ y\} = P\{e^X ≤ y\} = P\{X ≤ \log y\} = \int_0^{\log y} 2x\,dx = \log^2 y

Therefore,

fY(y) = \frac{2}{y}\log y, \qquad 1 ≤ y ≤ e

E(Y) = \int_{-∞}^{∞} y\, fY(y)\,dy = \int_1^e 2\log y\,dy
= 2y\log y\Big|_1^e - \int_1^e 2y \frac{1}{y}\,dy = 2
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 219

Solution 2:

E(e^X) = \int_0^1 e^x \cdot 2x\,dx = \int_0^1 2x\,d[e^x]
= 2x e^x\Big|_0^1 - 2\int_0^1 e^x\,dx
= 2e - 2(e-1) = 2
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 220

Lemma 5.2.2.1: For a non-negative random variable X,

E(X) = \int_0^{∞} P\{X > x\}\,dx
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 221

Lemma 5.2.2.1: For a non-negative random variable X,

E(X) = \int_0^{∞} P\{X > x\}\,dx

Proof:

\int_0^{∞} P\{X > x\}\,dx = \int_0^{∞}\int_x^{∞} fX(y)\,dy\,dx
= \int_0^{∞}\int_0^{y} fX(y)\,dx\,dy
= \int_0^{∞} fX(y)\left(\int_0^y dx\right) dy
= \int_0^{∞} fX(y)\, y\,dy = E(X)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 222

X is a continuous random variable.

• E(aX + b) = aE(X) + b, where a and b are constants.

• Var(X) = E\left[(X - E(X))^2\right] = E(X^2) - E^2(X).

• Var(aX + b) = a^2 Var(X)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 223

Section 5.3: The Uniform Random Variable

Definition: X is a uniform random variable on the interval (a, b) if its
probability density function is given by

f(x) = \begin{cases} \frac{1}{b-a} & \text{if } a < x < b \\ 0 & \text{otherwise} \end{cases}

Denote X ∼ Uniform(a, b), or simply X ∼ U(a, b).


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 224

Cumulative function:

F(x) = \begin{cases} 0 & \text{if } x ≤ a \\ \frac{x-a}{b-a} & \text{if } a < x < b \\ 1 & \text{if } x ≥ b \end{cases}

If X ∼ U(0, 1), then F(x) = x for 0 ≤ x ≤ 1.

Expectation:

E(X) = \frac{a+b}{2}

Variance:

Var(X) = \frac{(b-a)^2}{12}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 225

Section 5.4: Normal Random Variable

Definition: X is a normal random variable, X ∼ N(µ, σ^2), if its density is
given by

f(x) = \frac{1}{\sqrt{2π}\,σ} e^{-\frac{(x-µ)^2}{2σ^2}}, \qquad -∞ < x < ∞
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 226

• If X ∼ N (µ, σ 2 ), then E(X) = µ and V ar(X) = σ 2 .

• The density function is a “bell-shaped” curve.

• It was first introduced by the French mathematician Abraham de Moivre in 1733,
to approximate the binomial when n was large.

• The distribution became truly famous because of Karl F. Gauss,
and hence its other common name is the "Gaussian distribution."

• It is called the "normal distribution" (a usage probably started by Karl Pearson)
because "most people" believed that it was "normal" for any well-behaved data set
to follow this distribution.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 227

Check that

\int_{-∞}^{∞} f(x)\,dx = \int_{-∞}^{∞} \frac{1}{\sqrt{2π}\,σ} e^{-\frac{(x-µ)^2}{2σ^2}}\,dx = 1.

Note that, by letting z = \frac{x-µ}{σ},

\int_{-∞}^{∞} \frac{1}{\sqrt{2π}\,σ} e^{-\frac{(x-µ)^2}{2σ^2}}\,dx = \int_{-∞}^{∞} \frac{1}{\sqrt{2π}} e^{-\frac{z^2}{2}}\,dz = I.

It suffices to show that I^2 = 1.

I^2 = \int_{-∞}^{∞} \frac{1}{\sqrt{2π}} e^{-\frac{z^2}{2}}\,dz \times \int_{-∞}^{∞} \frac{1}{\sqrt{2π}} e^{-\frac{w^2}{2}}\,dw
= \int_{-∞}^{∞}\int_{-∞}^{∞} \frac{1}{2π} e^{-\frac{z^2+w^2}{2}}\,dz\,dw
= \int_0^{∞}\int_0^{2π} \frac{1}{2π} e^{-\frac{r^2}{2}}\, r\,dθ\,dr = \frac{1}{2π}\int_0^{2π} dθ \int_0^{∞} e^{-\frac{r^2}{2}}\,d\left(\frac{r^2}{2}\right) = 1.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 228

Expectation and Variance

If X ∼ N(µ, σ^2), then E(X) = µ and Var(X) = σ^2.

E(X) = \int_{-∞}^{∞} x \frac{1}{\sqrt{2π}\,σ} e^{-\frac{(x-µ)^2}{2σ^2}}\,dx
= \int_{-∞}^{∞} (x-µ) \frac{1}{\sqrt{2π}\,σ} e^{-\frac{(x-µ)^2}{2σ^2}}\,dx + µ\int_{-∞}^{∞} \frac{1}{\sqrt{2π}\,σ} e^{-\frac{(x-µ)^2}{2σ^2}}\,dx
= \int_{-∞}^{∞} (x-µ) \frac{1}{\sqrt{2π}\,σ} e^{-\frac{(x-µ)^2}{2σ^2}}\,d[x-µ] + µ
= \int_{-∞}^{∞} z \frac{1}{\sqrt{2π}\,σ} e^{-\frac{z^2}{2σ^2}}\,dz + µ
= µ.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 229

Var(X) = E\left[(X - µ)^2\right]
= \int_{-∞}^{∞} (x-µ)^2 \frac{1}{\sqrt{2π}\,σ} e^{-\frac{(x-µ)^2}{2σ^2}}\,d[x-µ]
= \int_{-∞}^{∞} z^2 \frac{1}{\sqrt{2π}\,σ} e^{-\frac{z^2}{2σ^2}}\,dz
= σ^2 \int_{-∞}^{∞} t^2 \frac{1}{\sqrt{2π}} e^{-\frac{t^2}{2}}\,dt \qquad (t = z/σ)
= σ^2 \int_{-∞}^{∞} t \frac{1}{\sqrt{2π}} e^{-\frac{t^2}{2}}\,d\left(\frac{t^2}{2}\right)
= σ^2 \int_{-∞}^{∞} \left(-t \frac{1}{\sqrt{2π}}\right) d\left[e^{-\frac{t^2}{2}}\right]
= σ^2 \int_{-∞}^{∞} \frac{1}{\sqrt{2π}} e^{-\frac{t^2}{2}}\,dt - \frac{1}{\sqrt{2π}} σ^2\, t e^{-\frac{t^2}{2}}\Big|_{-∞}^{∞} = σ^2
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 230

Why \lim_{t→∞} t e^{-\frac{t^2}{2}} = 0? By L'Hôpital's rule,

\lim_{t→∞} t e^{-\frac{t^2}{2}} = \lim_{t→∞} \frac{t}{e^{t^2/2}} = \lim_{t→∞} \frac{[t]'}{[e^{t^2/2}]'} = \lim_{t→∞} \frac{1}{t\, e^{t^2/2}} = 0
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 231

Normal Distribution Approximates Binomial

Suppose X ∼ Binomial(n, p). For fixed p, as n → ∞

Binomial(n, p) ≈ N (µ, σ 2 ),

µ = np, σ 2 = np(1 − p).


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 232

[Figure: binomial(n = 10, p = 0.2) mass function (bars) with the approximating normal density N(2, 1.6) overlaid; axes: x vs. density (mass) function.]
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 233

[Figure: binomial(n = 20, p = 0.2) mass function with the approximating normal density N(4, 3.2) overlaid.]
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 234

[Figure: binomial(n = 50, p = 0.2) mass function with the approximating normal density N(10, 8) overlaid.]
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 235

[Figure: binomial(n = 100, p = 0.2) mass function with the approximating normal density N(20, 16) overlaid.]
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 236

[Figure: binomial(n = 1000, p = 0.2) mass function with the approximating normal density N(200, 160) overlaid; the agreement is essentially exact.]
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 237

Matlab code

function NormalApprBinomial(n,p);
% Overlay the N(np, np(1-p)) density on the binomial(n,p) pmf
mu = n*p; sigma2 = n*p*(1-p);

figure;
bar((0:n),binopdf(0:n,n,p),'g'); hold on; grid on;
x = mu - 3*sigma2:0.001:mu + 3*sigma2;
plot(x,normpdf(x,mu,sqrt(sigma2)),'r-','linewidth',2);
xlabel('x'); ylabel('Density (mass) function');
title(['n = ' num2str(n) ' p = ' num2str(p)]);
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 238

The Standard Normal Distribution

Z ∼ N(0, 1) is called the standard normal.

• E(Z) = 0, Var(Z) = 1.

• If X ∼ N(µ, σ^2), then Y = \frac{X-µ}{σ} ∼ N(0, 1).

• For Z ∼ N(0, 1), denote the density function fZ(z) by φ(z)
and the cumulative function FZ(z) by Φ(z):

φ(z) = \frac{1}{\sqrt{2π}} e^{-\frac{z^2}{2}}, \qquad Φ(z) = \int_{-∞}^{z} \frac{1}{\sqrt{2π}} e^{-\frac{t^2}{2}}\,dt

• φ(-x) = φ(x), Φ(-x) = 1 - Φ(x). Numerical table on Page 222.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 239

Example 5.4.4b: X ∼ N(µ = 3, σ^2 = 9). Find

• P\{2 < X < 5\}
• P\{X > 0\}
• P\{|X - 3| > 6\}.

Solution:
Let Y = \frac{X-µ}{σ} = \frac{X-3}{3}. Then Y ∼ N(0, 1).

P\{2 < X < 5\} = P\left\{\frac{2-3}{3} < \frac{X-3}{3} < \frac{5-3}{3}\right\}
= P\left\{-\frac{1}{3} < Y < \frac{2}{3}\right\} = Φ\left(\frac{2}{3}\right) - Φ\left(-\frac{1}{3}\right)
= Φ(0.6667) - 1 + Φ(0.3333)
≈ 0.7475 - 1 + 0.6306 ≈ 0.3781
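These table lookups can be reproduced numerically (a Matlab sketch of ours, using normcdf from the Statistics Toolbox):

mu = 3; sigma = 3;                                        % X ~ N(3, 9)
p1 = normcdf(5,mu,sigma) - normcdf(2,mu,sigma)            % P{2 < X < 5}, about 0.3781
p2 = 1 - normcdf(0,mu,sigma)                              % P{X > 0}
p3 = normcdf(-3,mu,sigma) + (1 - normcdf(9,mu,sigma))     % P{|X - 3| > 6}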
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 240

Section 5.5: Exponential Random Variable

Definition: An exponential random variable X has the probability density
function given by

f(x) = \begin{cases} λe^{-λx} & \text{if } x ≥ 0 \\ 0 & \text{if } x < 0 \end{cases} \qquad (λ > 0)

X ∼ EXP(λ).

Cumulative function:

F(x) = \int_0^x λe^{-λt}\,dt = 1 - e^{-λx}, \qquad (x ≥ 0).

F(∞) =??
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 241

Moments:

E(X^n) = \int_0^{∞} x^n λe^{-λx}\,dx
= -\int_0^{∞} x^n\,d[e^{-λx}]
= \int_0^{∞} e^{-λx}\, n x^{n-1}\,dx
= \frac{n}{λ} E(X^{n-1})

E(X) = \frac{1}{λ}, \quad E(X^2) = \frac{2}{λ^2}, \quad Var(X) = \frac{1}{λ^2}, \quad E(X^n) = \frac{n!}{λ^n}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 242

Applications of Exponential Distribution

• The exponential distribution often arises as the distribution of the
amount of time until some specific event occurs.

• For instance, the amount of time (starting from now) until an earthquake
occurs, or until a new war breaks out, etc.

• The exponential distribution is memoryless (a one-line check follows):

P\{X > s + t \mid X > t\} = P\{X > s\}, \qquad \text{for all } s, t ≥ 0
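The check uses the tail probability P\{X > t\} = e^{-λt} (added here):

P\{X > s+t \mid X > t\} = \frac{P\{X > s+t\}}{P\{X > t\}} = \frac{e^{-λ(s+t)}}{e^{-λt}} = e^{-λs} = P\{X > s\}.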


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 243

Example 5b: Suppose that the length of a phone call in minutes is an
exponential random variable with parameter λ = \frac{1}{10}. If someone arrives
immediately ahead of you at a public telephone booth, find the probability that you
will have to wait

• (a) more than 10 minutes;
• (b) between 10 and 20 minutes.

Solution:

• P\{X > 10\} = e^{-10λ} = e^{-1} ≈ 0.368.
• P\{10 < X < 20\} = e^{-1} - e^{-2} ≈ 0.233
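The same numbers come out of the Statistics Toolbox (a sketch of ours; note Matlab's expcdf is parameterized by the mean 1/λ = 10):

mu = 10;                               % mean call length, 1/lambda
pa = 1 - expcdf(10, mu)                % P{X > 10} = e^{-1}, about 0.368
pb = expcdf(20, mu) - expcdf(10, mu)   % P{10 < X < 20} = e^{-1} - e^{-2}, about 0.233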
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 244

Section 5.6.1: The Gamma Distribution

Definition: A random variable has a gamma distribution with parameters
(α, λ), where α, λ > 0, if its density function is given by

f(x) = \begin{cases} \frac{λe^{-λx}(λx)^{α-1}}{Γ(α)} & x ≥ 0 \\ 0 & x < 0 \end{cases}

Denote X ∼ Gamma(α, λ).

\int_0^{∞} f(x)\,dx = 1??
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 245

The Gamma function is defined as

Γ(α) = \int_0^{∞} e^{-y} y^{α-1}\,dy = (α-1)Γ(α-1).

Γ(1) = 1. \quad Γ(0.5) = \sqrt{π}. \quad Γ(n) = (n-1)!.
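The recursion is just integration by parts, as in the examples earlier (derivation added here):

Γ(α) = \int_0^{∞} y^{α-1} e^{-y}\,dy = -y^{α-1} e^{-y}\Big|_0^{∞} + (α-1)\int_0^{∞} y^{α-2} e^{-y}\,dy = (α-1)Γ(α-1).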
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 246

Connection to exponential distribution

• When α = 1, the gamma distribution reduces to an exponential distribution.


Gamma(α = 1, λ) = EXP (λ).

• A gamma distribution Gamma(n, λ) often arises as the distribution of the


amount of time one has to wait until a total of n events has occurred.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 247

X ∼ Gamma(α, λ).
Expectation and Variance:

E(X) = \frac{α}{λ}, \qquad Var(X) = \frac{α}{λ^2}.

E(X) = \int_0^{∞} x \frac{λe^{-λx}(λx)^{α-1}}{Γ(α)}\,dx
= \int_0^{∞} \frac{e^{-λx}(λx)^{α}}{Γ(α)}\,dx
= \frac{Γ(α+1)}{Γ(α)} \frac{1}{λ} \int_0^{∞} \frac{λe^{-λx}(λx)^{α}}{Γ(α+1)}\,dx
= \frac{Γ(α+1)}{Γ(α)} \frac{1}{λ}
= \frac{α}{λ}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 248

Section 5.6.4: The Beta Distribution

Definition: A random variable X has a beta distribution if its density is given
by

f(x) = \begin{cases} \frac{1}{B(a,b)} x^{a-1}(1-x)^{b-1} & 0 < x < 1 \\ 0 & \text{otherwise} \end{cases}

Denote X ∼ Beta(a, b).

The Beta function:

B(a, b) = \frac{Γ(a)Γ(b)}{Γ(a+b)} = \int_0^1 x^{a-1}(1-x)^{b-1}\,dx
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 249

X ∼ Beta(a, b).
Expectation and Variance:

E(X) = \frac{a}{a+b},

Var(X) = \frac{ab}{(a+b)^2(a+b+1)}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 250

Sec. 5.7: Distribution of a Function of a Random Variable

Given the density function fX (x) and cumulative function FX (x) for a random
variable X , what is the general formula for the density function fY (y) of the
random variable Y = g(X)?

Example 5.7.7a: X ∼ Uniform(0, 1). Y = g(X) = X^n.

FY(y) = P\{Y ≤ y\} = P\{X^n ≤ y\} = P\{X ≤ y^{1/n}\} = FX(y^{1/n}) = y^{1/n}

fY(y) = \frac{1}{n} y^{1/n - 1}, \qquad 0 ≤ y ≤ 1
= fX\left(g^{-1}(y)\right) \left|\frac{d}{dy} g^{-1}(y)\right|

y = g(x) = x^n \implies g^{-1}(y) = y^{1/n}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 251

Theorem 7.1: Let X be a continuous random variable having probability


density function fX . Suppose g(x) is a strictly monotone (increasing or
decreasing), differentiable function of x. Then the random variable Y = g(X)
has a probability density function given by


fY(y) = \begin{cases} fX\left(g^{-1}(y)\right) \left|\frac{d}{dy} g^{-1}(y)\right| & \text{if } y = g(x) \text{ for some } x \\ 0 & \text{if } y ≠ g(x) \text{ for all } x \end{cases}
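As a consistency check (added here), Example 5.1.1d fits this theorem: g(x) = 2x gives g^{-1}(y) = y/2 and \frac{d}{dy} g^{-1}(y) = \frac{1}{2}, so fY(y) = \frac{1}{2} fX(y/2), matching the direct computation.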
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 252

Example 7b: Y = X^2

FY(y) = P\{Y ≤ y\} = P\{X^2 ≤ y\}
= P\{-\sqrt{y} ≤ X ≤ \sqrt{y}\}
= FX(\sqrt{y}) - FX(-\sqrt{y})

fY(y) = \frac{1}{2\sqrt{y}}\left(fX(\sqrt{y}) + fX(-\sqrt{y})\right)

Does this example violate Theorem 7.1?


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 253

Example (Cauchy distribution): Suppose a random variable X has density
function

f(x) = \frac{1}{π(1+x^2)}, \qquad -∞ < x < ∞

Q: Find the distribution of \frac{1}{X}.

One can check \int_{-∞}^{∞} f(x)\,dx = 1, but E|X| = ∞ (the mean does not exist).
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 254

Solution: Denote the cumulative function of X by FX(x).

Let Z = 1/X. Assume z > 0:

FZ(z) = P\{1/X ≤ z\} = P\{X ≤ 0\} + P\{X ≥ 1/z, X ≥ 0\}
= \frac{1}{2} + \int_{1/z}^{∞} \frac{1}{π(1+x^2)}\,dx
= \frac{1}{2} + \frac{1}{π} \tan^{-1}(x)\Big|_{1/z}^{∞} = \frac{1}{2} + \frac{1}{π}\left(\frac{π}{2} - \tan^{-1}(1/z)\right)

Therefore, for z > 0,

fZ(z) = \frac{1}{π}\left[\frac{π}{2} - \tan^{-1}(1/z)\right]' = \frac{1}{π(1+z^2)}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 255

Assume z < 0:

FZ(z) = P\{1/X ≤ z\} = P\{X ≥ 1/z, X ≤ 0\}
= \int_{1/z}^{0} \frac{1}{π(1+x^2)}\,dx
= \frac{1}{π} \tan^{-1}(x)\Big|_{1/z}^{0} = \frac{1}{π}\left(0 - \tan^{-1}(1/z)\right)

Therefore, for z < 0,

fZ(z) = \frac{1}{π}\left[0 - \tan^{-1}(1/z)\right]' = \frac{1}{π(1+z^2)}.

Thus, Z and X have the same density function.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 256

Chapter 6: Jointly Distributed Random Variables

Definition: For any two random variables X and Y , the joint cumulative
probability distribution is defined as

F (x, y) = P {X ≤ x, Y ≤ y} , −∞ < x, y < ∞


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 257

F (x, y) = P {X ≤ x, Y ≤ y} , −∞ < x, y < ∞


The distribution of X is

FX (x) =P {X ≤ x}
=P {X ≤ x, Y ≤ ∞}
=F (x, ∞)

The distribution of Y is

FY (y) =P {Y ≤ y} = F (∞, y)

Example:

P {X > x, Y > y} = 1 − FX (x) − FY (y) + F (x, y)


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 258

Jointly Discrete Random Variables

X and Y are both discrete random variables.

Joint probability mass function:

p(x, y) = P\{X = x, Y = y\}, \qquad \sum_x \sum_y p(x, y) = 1

Marginal probability mass functions:

pX(x) = P\{X = x\} = \sum_{y: p(x,y) > 0} p(x, y)

pY(y) = P\{Y = y\} = \sum_{x: p(x,y) > 0} p(x, y)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 259

Example 1a: Suppose that 3 balls are randomly selected from an urn
containing 3 red, 4 white, and 5 blue balls. Let X and Y denote, respectively, the
number of red and white balls chosen.

Q: The joint probability mass function p(x, y) = P {X = x, Y = y}=??


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 260

p(0, 0) = \frac{\binom{5}{3}}{\binom{12}{3}} = \frac{10}{220}

p(0, 1) = \frac{\binom{4}{1}\binom{5}{2}}{\binom{12}{3}} = \frac{40}{220}

p(0, 2) = \frac{\binom{4}{2}\binom{5}{1}}{\binom{12}{3}} = \frac{30}{220}

p(0, 3) = \frac{\binom{4}{3}}{\binom{12}{3}} = \frac{4}{220}

pX(0) = p(0, 0) + p(0, 1) + p(0, 2) + p(0, 3) = \frac{84}{220}

Directly: pX(0) = \frac{\binom{9}{3}}{\binom{12}{3}} = \frac{84}{220}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 261

Jointly Continuous Random Variables

X and Y are jointly continuous if there exists a function f(x, y), defined for all
real x and y, having the property that for every set C of pairs of real numbers,

P\{(X, Y) ∈ C\} = \iint_{(x,y) ∈ C} f(x, y)\,dx\,dy


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 262

The joint cumulative function:

F(x, y) = P\{X ∈ (-∞, x], Y ∈ (-∞, y]\} = \int_{-∞}^{y}\int_{-∞}^{x} f(u, v)\,du\,dv

The joint probability density function:

f(x, y) = \frac{∂^2}{∂x∂y} F(x, y)

The marginal probability density functions:

fX(x) = \int_{-∞}^{∞} f(x, y)\,dy, \qquad fY(y) = \int_{-∞}^{∞} f(x, y)\,dx
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 263

Example 1c: The joint density function is given by

f(x, y) = \begin{cases} Ce^{-x}e^{-2y} & 0 < x < ∞,\; 0 < y < ∞ \\ 0 & \text{otherwise} \end{cases}

• (a) Determine C .
• (b) P {X > 1, Y < 1},
• (c) P {X < Y },
• (d) P {X < a}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 264

Solution (a):

1 = \int_0^{∞}\int_0^{∞} Ce^{-x}e^{-2y}\,dx\,dy = C \int_0^{∞} e^{-x}\,dx \int_0^{∞} e^{-2y}\,dy = C \times 1 \times \frac{1}{2} = \frac{C}{2}

Therefore, C = 2.

Solution (b):

P\{X > 1, Y < 1\} = \int_0^1\int_1^{∞} 2e^{-x}e^{-2y}\,dx\,dy = e^{-1}\left(1 - e^{-2}\right)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 265

Solution (c):

P\{X < Y\} = \iint_{(x,y): x<y} 2e^{-x}e^{-2y}\,dx\,dy
= \int_0^{∞}\int_0^{y} 2e^{-x}e^{-2y}\,dx\,dy
= \int_0^{∞} \left(\int_0^y e^{-x}\,dx\right) 2e^{-2y}\,dy
= \int_0^{∞} \left(1 - e^{-y}\right) 2e^{-2y}\,dy
= \int_0^{∞} \left(2e^{-2y} - 2e^{-3y}\right) dy
= 1 - \frac{2}{3} = \frac{1}{3}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 266

Solution (d):

P\{X < a\} = P\{X < a, Y < ∞\}
= \int_0^a \int_0^{∞} 2e^{-2y} e^{-x}\,dy\,dx
= \int_0^a e^{-x}\,dx
= 1 - e^{-a}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 267

Example 1e: The joint density is given by

f(x, y) = \begin{cases} e^{-(x+y)} & 0 < x < ∞,\; 0 < y < ∞ \\ 0 & \text{otherwise} \end{cases}

Q: Find the density function of the random variable X/Y .


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 268

Solution: Let Z = X/Y, 0 < Z < ∞.

FZ(z) = P\{Z ≤ z\} = P\{X/Y ≤ z\}
= \iint_{x/y ≤ z} e^{-(x+y)}\,dx\,dy
= \int_0^{∞} \left(\int_0^{yz} e^{-(x+y)}\,dx\right) dy
= \int_0^{∞} e^{-y}\left(1 - e^{-yz}\right) dy
= \int_0^{∞} \left(e^{-y} - e^{-y-yz}\right) dy
= 1 - \frac{1}{z+1}

Therefore, fZ(z) = \left[1 - \frac{1}{z+1}\right]' = \frac{1}{(z+1)^2}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 269

Alternatively, integrating over y first:

FZ(z) = P\{Z ≤ z\} = P\{X/Y ≤ z\}
= \iint_{x/y ≤ z} e^{-(x+y)}\,dx\,dy
= \int_0^{∞} \left[\int_{x/z}^{∞} e^{-(x+y)}\,dy\right] dx
= \int_0^{∞} e^{-x}\left[-e^{-y}\Big|_{x/z}^{∞}\right] dx
= \int_0^{∞} e^{-x - \frac{x}{z}}\,dx
= \frac{1}{1 + \frac{1}{z}} = 1 - \frac{1}{z+1}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 270

Multiple Jointly Distributed Random Variables

Joint cumulative distribution function

F (x1 , x2 , x3 , ..., xn ) = P {X1 ≤ x1 , X2 ≤ x2 , ..., Xn ≤ xn }

Suppose X1 , X2 , ..., Xn are discrete random variables.


Joint probability mass function

p(x1 , x2 , ..., xn ) = P {X1 = x1 , X2 = x2 , ..., Xn = xn }


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 271

X1, X2, ..., Xn are jointly continuous if there exists a function f(x1, x2, ..., xn)
such that for any set C in n-dimensional space,

P\{(X_1, X_2, ..., X_n) ∈ C\} = \int \cdots \int_{(x_1, x_2, ..., x_n) ∈ C} f(x_1, x_2, ..., x_n)\,dx_1\,dx_2 \cdots dx_n

f(x1, x2, ..., xn) is the joint probability density function.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 272

Section 6.2: Independent Random Variables

Definition: The random variables X and Y are said to be independent if for
any two sets of real numbers A and B,

P\{X ∈ A, Y ∈ B\} = P\{X ∈ A\} \times P\{Y ∈ B\}

equivalently,

P\{X ≤ x, Y ≤ y\} = P\{X ≤ x\} \times P\{Y ≤ y\}

or, equivalently,

p(x, y) = pX(x)\,pY(y) for all x, y, when X and Y are discrete;
f(x, y) = fX(x)\,fY(y) for all x, y, when X and Y are continuous.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 273

Example 2b: Suppose the number of people that enter a post office on a
given day is a Poisson random variable with parameter λ.

Each person that enters the post office is male with probability p and a female
with probability 1 − p.

Let X and Y , respectively, be the number of males and females that enter the
post office.

• (a): Find the joint probability mass function P {X = i, Y = j}.


• (b): Find the marginal probability mass functions P {X = i}, P {Y = j}.
• (c): Show that X and Y are independent.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 274

Key: Suppose we know there are N = n people that enter a post office. Then
the number of males is a binomial with parameters (n, p).

[X|N = n] ∼ Binomial (n, p)

[N = X + Y ] ∼ P oi(λ).

Therefore,

P\{X = i, Y = j\} = \sum_{n=0}^{∞} P\{X = i, Y = j \mid N = n\} P\{N = n\}
= P\{X = i, Y = j \mid N = i+j\} P\{N = i+j\}

because P\{X = i, Y = j \mid N ≠ i+j\} = 0.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 275

Solution (a): Conditioning on X + Y = i + j:

P\{X = i, Y = j\}
= P\{X = i, Y = j \mid X+Y = i+j\} P\{X+Y = i+j\}
= \binom{i+j}{i} p^i (1-p)^j\; e^{-λ} \frac{λ^{i+j}}{(i+j)!}
= \frac{p^i (1-p)^j e^{-λ} λ^i λ^j}{i!\,j!}
= \frac{p^i (1-p)^j e^{-λ(p + (1-p))} λ^i λ^j}{i!\,j!}
= \left[e^{-λp} \frac{(λp)^i}{i!}\right] \left[e^{-λ(1-p)} \frac{(λ(1-p))^j}{j!}\right]
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 276

Solution (b):

P\{X = i\} = \sum_{j=0}^{∞} P\{X = i, Y = j\}
= \sum_{j=0}^{∞} \left[e^{-λp} \frac{(λp)^i}{i!}\right] \left[e^{-λ(1-p)} \frac{(λ(1-p))^j}{j!}\right]
= \left[e^{-λp} \frac{(λp)^i}{i!}\right] \left[e^{-λ(1-p)} \sum_{j=0}^{∞} \frac{(λ(1-p))^j}{j!}\right]
= e^{-λp} \frac{(λp)^i}{i!}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 277

Example 2c: A man and a woman decide to meet at a certain location. Each
person independently arrives at a time uniformly distributed between 12 noon and
1 pm.

Q: Find the probability that the first to arrive has to wait longer than 10 minutes.

Solution:
Let X = number of minutes by which one person arrives later than 12 noon.
Let Y = number of minutes by which the other person arrives later than 12 noon.
X and Y are independent.

The desired probability = P {|X − Y | ≥ 10}.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 278

P\{|X - Y| ≥ 10\} = \iint_{|x-y|>10} f(x, y)\,dx\,dy
= \iint_{|x-y|>10} fX(x) fY(y)\,dx\,dy
= 2 \iint_{y < x - 10} \frac{1}{60} \cdot \frac{1}{60}\,dx\,dy
= 2 \int_{10}^{60} \int_0^{x-10} \frac{1}{3600}\,dy\,dx
= \frac{25}{36}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 279

A Necessary and Sufficient Condition for Independence

Proposition 2.1: The continuous (discrete) random variables X and Y are
independent if and only if their joint probability density (mass) function can be
expressed as

fX,Y(x, y) = h(x)\,g(y), \qquad -∞ < x, y < ∞

Proof: By definition, X and Y are independent if fX,Y(x, y) = fX(x) fY(y).
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 280

Example 2f:
(a): Two random variables X and Y with joint density given by

f (x, y) = 6e−2x e−3y , 0 < x, y < ∞

are independent.

(b): Two random variables X and Y with joint density given by

f (x, y) = 24xy 0 < x, y < 1, 0 < x + y < 1

are NOT independent. (Why?)


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 281

Independence of More Than Two Random Variables

Definition: The n random variables, X1 , X2 , ..., Xn , are said to be


independent if, for all sets of real numbers A1, A2, ..., An,

P\{X_1 ∈ A_1, X_2 ∈ A_2, ..., X_n ∈ A_n\} = \prod_{i=1}^{n} P\{X_i ∈ A_i\}

equivalently,

P\{X_1 ≤ x_1, X_2 ≤ x_2, ..., X_n ≤ x_n\} = \prod_{i=1}^{n} P\{X_i ≤ x_i\},

for all x1, x2, ..., xn.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 282

Example 2h: Let X , Y , Z be independent and uniformly distributed over


(0, 1). Compute P {X ≥ Y Z}.

Solution:

P\{X ≥ YZ\} = \iiint_{x ≥ yz} f(x, y, z)\,dx\,dy\,dz
= \int_0^1 \int_0^1 \int_{yz}^1 dx\,dy\,dz
= \int_0^1 \int_0^1 (1 - yz)\,dy\,dz
= \int_0^1 \left(1 - \frac{z}{2}\right) dz
= \frac{3}{4}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 283

Chapter 6.3: Sum of Independent Random Variables

Suppose that X and Y are independent. Let Z = X + Y.

FZ(z) = P\{X + Y ≤ z\} = \iint_{x+y ≤ z} fX(x) fY(y)\,dx\,dy
= \int_{-∞}^{∞} \int_{-∞}^{z-y} fX(x) fY(y)\,dx\,dy
= \int_{-∞}^{∞} fY(y) \int_{-∞}^{z-y} fX(x)\,dx\,dy
= \int_{-∞}^{∞} fY(y) FX(z - y)\,dy

fZ(z) = \frac{d}{dz} FZ(z) = \int_{-∞}^{∞} fY(y) fX(z - y)\,dy
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 284

The discrete case: if X and Y are two independent discrete random variables,
then Z = X + Y has probability mass function

pZ(z) = \sum_j pY(j)\, pX(z - j)

——————–

Example: Suppose X, Y ∼ Bernoulli(p) are independent. Then

Z = X + Y ∼ Binomial(2, p)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 285

Suppose X, Y ∼ Bernoulli(p) are independent. Then Z = X + Y ∼ Binomial(2, p).

pZ(0) = \sum_j pY(j) pX(0-j) = pY(0) pX(0) = (1-p)^2 = \binom{2}{0}(1-p)^2

pZ(1) = \sum_j pY(j) pX(1-j) = pY(0) pX(1-0) + pY(1) pX(1-1)
= (1-p)p + p(1-p) = \binom{2}{1} p(1-p)

pZ(2) = \sum_j pY(j) pX(2-j) = pY(1) pX(2-1) = p^2 = \binom{2}{2} p^2
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 286

Suppose X ∼ Binomial(n, p) and Y ∼ Binomial(m, p) are independent.


Then Z = X + Y ∼ Binomial(m + n, p).

pZ(z) = \sum_j pY(j)\, pX(z - j).

Note 0 ≤ j ≤ m and z − n ≤ j ≤ z . We can assume m ≤ n.

There are three cases to consider:

1. Suppose 0 ≤ z ≤ m, then 0 ≤ j ≤ z

2. Suppose m < z ≤ n, then 0 ≤ j ≤ m.

3. Suppose n < z ≤ m + n, then z − n ≤ j ≤ m.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 287

Suppose 0 ≤ z ≤ m; then 0 ≤ j ≤ z:

pZ(z) = \sum_j pY(j) pX(z-j) = \sum_{j=0}^{z} pY(j) pX(z-j)
= \sum_{j=0}^{z} \binom{m}{j} p^j (1-p)^{m-j} \binom{n}{z-j} p^{z-j} (1-p)^{n-z+j}
= p^z (1-p)^{m+n-z} \sum_{j=0}^{z} \binom{m}{j}\binom{n}{z-j}
= \binom{m+n}{z} p^z (1-p)^{m+n-z}

because we have seen (e.g., using a combinatorial argument)

\sum_{j=0}^{z} \binom{m}{j}\binom{n}{z-j} = \binom{m+n}{z}.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 288

Sum of Two Independent Uniform Random Variables

X ∼ U(0, 1) and Y ∼ U(0, 1) are independent. Let Z = X + Y.

fZ(z) = \int_{-∞}^{∞} fY(y) fX(z-y)\,dy = \int_0^1 fX(z-y)\,dy

We must have 0 ≤ y ≤ 1 and 0 ≤ z - y ≤ 1, i.e., z - 1 ≤ y ≤ z.

Two cases to consider

1. If 0 ≤ z ≤ 1, then 0 ≤ y ≤ z .

2. If 1 < z ≤ 2, then z − 1 < y ≤ 1.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 289

If 0 ≤ z ≤ 1, then 0 ≤ y ≤ z:

fZ(z) = \int_0^1 fX(z-y)\,dy = \int_0^z dy = z.

If 1 < z ≤ 2, then z - 1 ≤ y ≤ 1:

fZ(z) = \int_0^1 fX(z-y)\,dy = \int_{z-1}^1 dy = 2 - z.

fZ(z) = \begin{cases} z & 0 < z ≤ 1 \\ 2 - z & 1 < z < 2 \\ 0 & \text{otherwise} \end{cases}
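The triangular shape is easy to confirm by simulation (a Matlab sketch of ours):

z = rand(1e6,1) + rand(1e6,1);              % one million draws of Z = X + Y
[c, x] = hist(z, 50);
bar(x, c/(1e6*(x(2)-x(1)))); hold on;       % normalize counts to a density
t = 0:0.01:2;
plot(t, min(t, 2-t), 'r-', 'linewidth', 2); % f_Z(z) = z for z <= 1, 2 - z for z > 1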
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 290

Sum of Two Gamma Random Variables

X ∼ Gamma(t, λ) and Y ∼ Gamma(s, λ) are independent. Let Z = X + Y.

Show that Z ∼ Gamma(s + t, λ).

For an intuition, see notes page 246.


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 291

fZ(z) = \int_{-∞}^{∞} fY(y) fX(z-y)\,dy = \int_0^z fY(y) fX(z-y)\,dy

= \int_0^z \left[\frac{λe^{-λy}(λy)^{s-1}}{Γ(s)}\right] \left[\frac{λe^{-λ(z-y)}(λ(z-y))^{t-1}}{Γ(t)}\right] dy

= \frac{e^{-λz} λ^2 λ^{s+t-2}}{Γ(s)Γ(t)} \int_0^z \left[e^{λy} y^{s-1}\right]\left[e^{-λy}(z-y)^{t-1}\right] e^{-λ(z-y)+λ(z-y)}\,dy

= \frac{λe^{-λz} λ^{s+t-1}}{Γ(s)Γ(t)} \int_0^z y^{s-1} (z-y)^{t-1}\,dy
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 292

fZ(z) = \frac{λe^{-λz} λ^{s+t-1}}{Γ(s)Γ(t)} \int_0^z y^{s-1} (z-y)^{t-1}\,dy

= \left[\frac{λe^{-λz} (λz)^{s+t-1}}{Γ(s+t)}\right] \frac{Γ(s+t)}{z^{s+t-1}\,Γ(s)Γ(t)} \int_0^z y^{s-1} (z-y)^{t-1}\,dy

= \left[\frac{λe^{-λz} (λz)^{s+t-1}}{Γ(s+t)}\right] \frac{Γ(s+t)}{Γ(s)Γ(t)} \int_0^z \left(\frac{y}{z}\right)^{s-1} \left(1 - \frac{y}{z}\right)^{t-1} d\left(\frac{y}{z}\right)

= \left[\frac{λe^{-λz} (λz)^{s+t-1}}{Γ(s+t)}\right] \frac{Γ(s+t)}{Γ(s)Γ(t)} \int_0^1 θ^{s-1} (1-θ)^{t-1}\,dθ

= fW(z) \left[\frac{Γ(s+t)}{Γ(s)Γ(t)} \int_0^1 θ^{s-1} (1-θ)^{t-1}\,dθ\right], \qquad W ∼ Gamma(s+t, λ).

Because fZ(z) is also a density function, we must have

\frac{Γ(s+t)}{Γ(s)Γ(t)} \int_0^1 θ^{s-1} (1-θ)^{t-1}\,dθ = 1
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 293

Recall the definition of the Beta function in Section 5.6.4:

B(s, t) = \int_0^1 θ^{s-1} (1-θ)^{t-1}\,dθ

and its property (Eq. (6.3) in Section 5.6.4):

B(s, t) = \frac{Γ(s)Γ(t)}{Γ(s+t)}.

Thus,

\frac{Γ(s+t)}{Γ(s)Γ(t)} \int_0^1 θ^{s-1} (1-θ)^{t-1}\,dθ = \frac{Γ(s+t)}{Γ(s)Γ(t)} \cdot \frac{Γ(s)Γ(t)}{Γ(s+t)} = 1
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 294

Generalization to more than two gamma random variables

If X1, X2, ..., Xn are independent gamma random variables with respective
parameters (s1, λ), (s2, λ), ..., (sn, λ), then

X_1 + X_2 + \ldots + X_n = \sum_{i=1}^{n} X_i ∼ Gamma\left(\sum_{i=1}^{n} s_i,\; λ\right)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 295

Sum of Independent Normal Random Variables

Proposition 3.2: If Xi, i = 1 to n, are independent random variables that
are normally distributed with respective parameters µi and σi², i = 1 to n, then

X_1 + X_2 + \ldots + X_n = \sum_{i=1}^{n} X_i ∼ N\left(\sum_{i=1}^{n} µ_i,\; \sum_{i=1}^{n} σ_i^2\right)

————————

It suffices to show

X_1 + X_2 ∼ N\left(µ_1 + µ_2,\; σ_1^2 + σ_2^2\right)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 296

X ∼ N(µ_x, σ_x^2), Y ∼ N(µ_y, σ_y^2). Let Z = X + Y.

fZ(z) = \int_{-∞}^{∞} fX(z-y) fY(y)\,dy
= \int_{-∞}^{∞} \frac{1}{\sqrt{2π}σ_x} e^{-\frac{(z-y-µ_x)^2}{2σ_x^2}} \left[\frac{1}{\sqrt{2π}σ_y} e^{-\frac{(y-µ_y)^2}{2σ_y^2}}\right] dy
= \frac{1}{\sqrt{2π}} \int_{-∞}^{∞} \frac{1}{\sqrt{2π}σ_xσ_y} e^{-\frac{(z-y-µ_x)^2}{2σ_x^2} - \frac{(y-µ_y)^2}{2σ_y^2}}\,dy
= \frac{1}{\sqrt{2π}} \int_{-∞}^{∞} \frac{1}{\sqrt{2π}σ_xσ_y} e^{-\frac{(z-y-µ_x)^2 σ_y^2 + (y-µ_y)^2 σ_x^2}{2σ_x^2 σ_y^2}}\,dy
= \frac{1}{\sqrt{2π}} \int_{-∞}^{∞} \frac{1}{\sqrt{2π}σ_xσ_y} e^{-\frac{y^2(σ_x^2+σ_y^2) - 2y(µ_yσ_x^2 - (µ_x-z)σ_y^2) + (µ_x-z)^2σ_y^2 + µ_y^2σ_x^2}{2σ_x^2 σ_y^2}}\,dy
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 297

Let A = σ_xσ_y, B = σ_x^2σ_y^2, C = σ_x^2 + σ_y^2,
D = -2(µ_yσ_x^2 - (µ_x - z)σ_y^2),
E = (µ_x - z)^2σ_y^2 + µ_y^2σ_x^2.

fZ(z) = \int_{-∞}^{∞} fX(z-y) fY(y)\,dy
= \frac{1}{\sqrt{2π}} \int_{-∞}^{∞} \frac{1}{\sqrt{2π}σ_xσ_y} e^{-\frac{y^2(σ_x^2+σ_y^2) - 2y(µ_yσ_x^2 - (µ_x-z)σ_y^2) + (µ_x-z)^2σ_y^2 + µ_y^2σ_x^2}{2σ_x^2σ_y^2}}\,dy
= \frac{1}{\sqrt{2π}} \int_{-∞}^{∞} \frac{1}{\sqrt{2π}A} e^{-\frac{Cy^2 + Dy + E}{2B}}\,dy = \frac{1}{\sqrt{2π}} \sqrt{\frac{B}{CA^2}}\, e^{\frac{D^2 - 4EC}{8BC}}

because \int_{-∞}^{∞} \frac{1}{\sqrt{2π}σ} e^{-\frac{(y-µ)^2}{2σ^2}}\,dy = 1, for any µ, σ^2.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 298


\int_{-∞}^{∞} \frac{1}{\sqrt{2π}A} e^{-\frac{Cy^2 + Dy + E}{2B}}\,dy
= \int_{-∞}^{∞} \frac{1}{\sqrt{2π}A} e^{-\frac{y^2 + \frac{D}{C}y + \frac{E}{C}}{2B/C}}\,dy
= \int_{-∞}^{∞} \frac{1}{\sqrt{2π}A} e^{-\frac{\left(y + \frac{D}{2C}\right)^2 - \frac{D^2}{4C^2} + \frac{E}{C}}{2B/C}}\,dy
= e^{-\frac{-\frac{D^2}{4C^2} + \frac{E}{C}}{2B/C}} \cdot \frac{\sqrt{B/C}}{A} \int_{-∞}^{∞} \frac{1}{\sqrt{2π}\sqrt{B/C}} e^{-\frac{\left(y + \frac{D}{2C}\right)^2}{2B/C}}\,dy
= \sqrt{\frac{B}{CA^2}}\, e^{\frac{D^2 - 4EC}{8BC}}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 299

fZ(z) = \frac{1}{\sqrt{2π}} \sqrt{\frac{B}{CA^2}}\, e^{\frac{D^2 - 4EC}{8BC}}
= \frac{1}{\sqrt{2π}\sqrt{σ_x^2 + σ_y^2}}\, \exp\left\{\frac{(µ_yσ_x^2 - (µ_x-z)σ_y^2)^2 - ((µ_x-z)^2σ_y^2 + µ_y^2σ_x^2)(σ_x^2 + σ_y^2)}{2σ_x^2σ_y^2(σ_x^2 + σ_y^2)}\right\}
= \frac{1}{\sqrt{2π}\sqrt{σ_x^2 + σ_y^2}}\, \exp\left\{-\frac{(z - µ_x - µ_y)^2}{2(σ_x^2 + σ_y^2)}\right\}

Therefore, Z = X + Y ∼ N\left(µ_x + µ_y,\; σ_x^2 + σ_y^2\right).
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 300

Sum of Independent Poisson Random Variables

If X1, X2, ..., Xn are independent Poisson random variables with respective
parameters λi, i = 1 to n, then

X_1 + X_2 + \ldots + X_n = \sum_{i=1}^{n} X_i ∼ Poisson\left(\sum_{i=1}^{n} λ_i\right)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 301

Sum of Independent Binomial Random Variables

If X1, X2, ..., Xn are independent binomial random variables with respective
parameters (mi, p), i = 1 to n, then

X_1 + X_2 + \ldots + X_n = \sum_{i=1}^{n} X_i ∼ Binomial\left(\sum_{i=1}^{n} m_i,\; p\right)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 302

Sections 6.4, 6.5: Conditional Distributions

Recall P(E|F) = \frac{P(EF)}{P(F)}.

If X and Y are discrete random variables, then

p_{X|Y}(x|y) = \frac{p(x, y)}{pY(y)}

If X and Y are continuous random variables, then

f_{X|Y}(x|y) = \frac{f(x, y)}{fY(y)}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 303

Example 4b: Suppose X and Y are independent Poisson random variables


with respective parameters λx and λy .

Q: Calculate the conditional distribution of X , given that X + Y = n.

—————–

What is the distribution of X + Y ??


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 304

Example 4b: Suppose X and Y are independent Poisson random variables


with respective parameters λx and λy .

Q: Calculate the conditional distribution of X , given that X + Y = n.


Solution:

P\{X = k \mid X+Y = n\} = \frac{P\{X = k, X+Y = n\}}{P\{X+Y = n\}}
= \frac{P\{X = k, Y = n-k\}}{P\{X+Y = n\}} = \frac{P\{X = k\} P\{Y = n-k\}}{P\{X+Y = n\}}
= \left[\frac{e^{-λ_x} λ_x^k}{k!}\right] \left[\frac{e^{-λ_y} λ_y^{n-k}}{(n-k)!}\right] \left[\frac{n!}{e^{-λ_x - λ_y}(λ_x + λ_y)^n}\right]
= \binom{n}{k} \left(\frac{λ_x}{λ_x + λ_y}\right)^k \left(1 - \frac{λ_x}{λ_x + λ_y}\right)^{n-k}

Therefore, X \mid X+Y = n ∼ Binomial\left(n, p = \frac{λ_x}{λ_x + λ_y}\right).
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 305

Example 5b: The joint density of X and Y is given by



f(x, y) = \begin{cases} \frac{e^{-x/y} e^{-y}}{y} & 0 < x, y < ∞ \\ 0 & \text{otherwise} \end{cases}

Q: Find P\{X > 1 \mid Y = y\}.

————————–

P\{X > 1 \mid Y = y\} = \int_1^{∞} f_{X|Y}(x|y)\,dx
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 306

Solution:

f_{X|Y}(x|y) = \frac{f(x, y)}{fY(y)} = \frac{\frac{e^{-x/y}e^{-y}}{y}}{\int_0^{∞} \frac{e^{-x/y}e^{-y}}{y}\,dx} = \frac{\frac{e^{-x/y}}{y}}{\int_0^{∞} \frac{e^{-x/y}}{y}\,dx} = \frac{1}{y} e^{-x/y}

P\{X > 1 \mid Y = y\} = \int_1^{∞} f_{X|Y}(x|y)\,dx = \int_1^{∞} \frac{1}{y} e^{-x/y}\,dx = e^{-1/y}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 307

The Bivariate Normal Distribution

The random variables X and Y have a bivariate normal distribution if, for
constants µx, µy, σx, σy, -1 < ρ < 1, their joint density function is given, for
all -∞ < x, y < ∞, by

f(x, y) = \frac{1}{2πσ_xσ_y\sqrt{1-ρ^2}} \exp\left\{-\frac{1}{2(1-ρ^2)}\left[\frac{(x-µ_x)^2}{σ_x^2} + \frac{(y-µ_y)^2}{σ_y^2} - 2ρ\frac{(x-µ_x)(y-µ_y)}{σ_xσ_y}\right]\right\}

If X and Y are independent, then ρ = 0, and

f(x, y) = \frac{1}{2πσ_xσ_y} \exp\left\{-\frac{1}{2}\left[\frac{(x-µ_x)^2}{σ_x^2} + \frac{(y-µ_y)^2}{σ_y^2}\right]\right\}
= \frac{1}{\sqrt{2π}σ_x} e^{-\frac{(x-µ_x)^2}{2σ_x^2}} \times \frac{1}{\sqrt{2π}σ_y} e^{-\frac{(y-µ_y)^2}{2σ_y^2}}
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 308

fX(x) = \int_{-∞}^{∞} f(x, y)\,dy
= \int_{-∞}^{∞} \frac{1}{2πσ_xσ_y\sqrt{1-ρ^2}} \exp\left\{-\frac{1}{2(1-ρ^2)}\left[\frac{(x-µ_x)^2}{σ_x^2} + \frac{(y-µ_y)^2}{σ_y^2} - 2ρ\frac{(x-µ_x)(y-µ_y)}{σ_xσ_y}\right]\right\} dy
= \int_{-∞}^{∞} \frac{1}{2πσ_x\sqrt{1-ρ^2}} \exp\left\{-\frac{1}{2(1-ρ^2)}\left[\frac{(x-µ_x)^2}{σ_x^2} + t^2 - 2ρ\frac{(x-µ_x)t}{σ_x}\right]\right\} dt \qquad \left(t = \frac{y-µ_y}{σ_y}\right)
= \int_{-∞}^{∞} \frac{1}{2πσ_x\sqrt{1-ρ^2}} \exp\left\{-\frac{(x-µ_x)^2 + t^2σ_x^2 - 2ρ(x-µ_x)tσ_x}{2(1-ρ^2)σ_x^2}\right\} dt
= \int_{-∞}^{∞} \frac{1}{\sqrt{2π}A} e^{-\frac{Ct^2 + Dt + E}{2B}}\,dt = \sqrt{\frac{B}{CA^2}}\, e^{\frac{D^2 - 4EC}{8BC}} = \frac{1}{\sqrt{2π}σ_x} e^{-\frac{(x-µ_x)^2}{2σ_x^2}}

with A = \sqrt{2π}\sqrt{1-ρ^2}\,σ_x, \quad B = (1-ρ^2)σ_x^2, \quad C = σ_x^2,
D = -2ρ(x - µ_x)σ_x, \quad E = (x - µ_x)^2
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 309

Therefore,

X ∼ N(µ_x, σ_x^2), \qquad Y ∼ N(µ_y, σ_y^2)

The conditional density:

f_{X|Y}(x|y) = \frac{f(x, y)}{fY(y)}

A useful trick: only worry about the terms that are functions of x; the remaining
terms are just normalizing constants.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 310

Any (reasonable) non-negative function g(x) can essentially be viewed as a
probability density function, because one can always normalize it:

f(x) = \frac{g(x)}{\int_{-∞}^{∞} g(x)\,dx} = C g(x) ∝ g(x), \qquad \text{where } C = \frac{1}{\int_{-∞}^{∞} g(x)\,dx},

is a probability density function. (One can verify \int_{-∞}^{∞} f(x)\,dx = 1.)

C is just a (normalizing) constant.

When y is treated as a constant, C(y) is still a constant.

X|Y = y means y is a constant.

Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 311

For example, g(x) = e^{-\frac{(x-1)^2}{2}} is essentially the density function of N(1, 1).

Similarly, g(x) = e^{-\frac{x^2 - 2x}{2}} and g(x) = e^{-\frac{x^2 - 2xy}{2}} are both (essentially) the
density functions of normals.

————–

\frac{1}{C} = \int_{-∞}^{∞} e^{-\frac{x^2 - 2xy}{2}}\,dx = \sqrt{2π}\, e^{\frac{y^2}{2}}

Therefore,

f(x) = C g(x) = \frac{1}{\sqrt{2π}} e^{-\frac{y^2}{2}} \times g(x) = \frac{1}{\sqrt{2π}} e^{-\frac{(x-y)^2}{2}}

is the density function of N(y, 1), where y is merely a constant.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 312

(X, Y) is a bivariate normal.

f_{X|Y}(x|y) = \frac{f(x, y)}{fY(y)} ∝ f(x, y)
∝ \exp\left\{-\frac{1}{2(1-ρ^2)}\left[\frac{(x-µ_x)^2}{σ_x^2} - 2ρ\frac{(x-µ_x)(y-µ_y)}{σ_xσ_y}\right]\right\}
∝ \exp\left\{-\frac{(x-µ_x)^2 - 2ρ(x-µ_x)(y-µ_y)\frac{σ_x}{σ_y}}{2(1-ρ^2)σ_x^2}\right\}
∝ \exp\left\{-\frac{\left(x - µ_x - ρ(y-µ_y)\frac{σ_x}{σ_y}\right)^2}{2(1-ρ^2)σ_x^2}\right\}

Therefore,

X|Y ∼ N\left(µ_x + ρ(y-µ_y)\frac{σ_x}{σ_y},\; (1-ρ^2)σ_x^2\right)

Y|X ∼ N\left(µ_y + ρ(x-µ_x)\frac{σ_y}{σ_x},\; (1-ρ^2)σ_y^2\right)
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 313

Example 5d: Consider n + m trials having a common probability of


success. Suppose, however, that this success probability is not fixed in advance
but is chosen from U (0, 1).

Q: What is the conditional distribution of this success probability given that the
n + m trials result in n successes?

Solution:

Let X = trial success probability. X ∼ U (0, 1).


Let N = total number of successes. N |X = x ∼ Binomial(n + m, x).
The desired probability: X|N = n.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 314

Solution:

Let X = trial success probability. X ∼ U (0, 1).


Let N = total number of successes. N |X = x ∼ Binomial(n + m, x).

f_{X|N}(x|n) = \frac{P\{N = n \mid X = x\}\, fX(x)}{P\{N = n\}}
= \frac{\binom{n+m}{n} x^n (1-x)^m}{P\{N = n\}}
∝ x^n (1-x)^m

Therefore, X \mid N = n ∼ Beta(n+1, m+1).

Here X ∼ U(0, 1) is the prior distribution.
Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 315

If X \mid N ∼ Beta(n+1, m+1), then

E(X \mid N) = \frac{n+1}{(n+1) + (m+1)}

Suppose we do not have prior knowledge of the success probability X.
We observe n successes out of n + m trials.
The most intuitive guess (estimate) of X would be

\hat{X} = \frac{n}{n+m}

Assuming a uniform prior on X leads to the add-one smoothing.
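For instance (a small worked example added here): with n = 2 successes in n + m = 3 trials, the raw estimate is \hat{X} = 2/3 ≈ 0.67, while the posterior mean is E(X|N) = 3/5 = 0.6, pulled toward 1/2 by the uniform prior.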


Cornell University, BTRY 4080 / STSCI 4080 Fall 2009 Instructor: Ping Li 316

Section 6.6: Order Statistics

Let X1, X2, ..., Xn be n independent and identically distributed continuous
random variables having a common density f and distribution function F.

random variables having a common density f and distribution function F .

Define

X(1) = smallest of X1 , X2 , ..., Xn


X(2) = second smallest of X1 , X2 , ..., Xn
...
X(j) = j th smallest of X1 , X2 , ..., Xn
...
X(n) = largest of X1 , X2 , ..., Xn

The ordered values X(1) ≤ X(2) ≤ ... ≤ X(n) are known as the order
statistics corresponding to the random variables X1 , X2 , ..., Xn .
For example, the CDF of the largest (i.e., X(n)) should be

F_{X(n)}(x) = P(X(n) ≤ x) = P(X1 ≤ x, X2 ≤ x, ..., Xn ≤ x)

because, if the largest is smaller than x, then everyone is smaller than x.

Then, by independence,

F_{X(n)}(x) = P(X1 ≤ x) P(X2 ≤ x) ... P(Xn ≤ x) = [F(x)]^n,

and the density function is simply

f_{X(n)}(x) = ∂[F(x)]^n / ∂x = n [F(x)]^{n-1} f(x)
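A quick simulation of this result for U(0,1) samples (a sketch; n, the evaluation point, and the replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 1_000_000

# n iid U(0,1) samples per replication; the row-wise max is X_(n)
x_max = rng.random((reps, n)).max(axis=1)

x0 = 0.8
print((x_max <= x0).mean(), x0**n)   # empirical vs F(x0)^n = 0.8^5 = 0.32768
```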
The CDF of the smallest (i.e., X(1)) should be

F_{X(1)}(x) = P(X(1) ≤ x) = 1 - P(X(1) > x) = 1 - P(X1 > x, X2 > x, ..., Xn > x)

because, if the smallest is larger than x, then everyone is larger than x.

Then, by independence,

F_{X(1)}(x) = 1 - P(X1 > x) P(X2 > x) ... P(Xn > x) = 1 - [1 - F(x)]^n

and the density function is simply

f_{X(1)}(x) = ∂{1 - [1 - F(x)]^n} / ∂x = n [1 - F(x)]^{n-1} f(x)
Joint density function of order statistics

For x1 ≤ x2 ≤ ... ≤ xn,

f_{X(1),X(2),...,X(n)}(x1, x2, ..., xn) = n! f(x1) f(x2) ... f(xn)

Intuitively (and heuristically), for n = 2:

P{X(1) = x1, X(2) = x2}
= P{{X1 = x1, X2 = x2} OR {X2 = x1, X1 = x2}}
= P{X1 = x1, X2 = x2} + P{X2 = x1, X1 = x2}
= f(x1)f(x2) + f(x1)f(x2)
= 2! f(x1)f(x2)

Warning: this is a heuristic argument, not a rigorous proof. Please read the book.
Density function of the kth order statistic, X(k): f_{X(k)}(x).

• Select 1 from the n values X1, X2, ..., Xn. Let it be equal to x.

• Select k − 1 from the remaining n − 1 values. Let them be smaller than x.

• Let the rest be larger than x.

• Total number of choices is ??.

• For a given choice, the probability should be ??


• Select 1 from the n values X1, X2, ..., Xn. Let it be equal to x.

• Select k − 1 from the remaining n − 1 values. Let them be smaller than x.

• Let the rest be larger than x.

• Total number of choices is the multinomial coefficient
  C(n; 1, k−1, n−k) = n! / [1! (k−1)! (n−k)!].

• For a given choice, the probability should be f(x) [F(x)]^{k-1} [1 - F(x)]^{n-k}.

Therefore,

f_{X(k)}(x) = C(n; 1, k−1, n−k) f(x) [F(x)]^{k-1} [1 - F(x)]^{n-k}
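For U(0,1) samples (f(x) = 1, F(x) = x) this formula reduces to the Beta(k, n−k+1) density, which gives a one-line check (a sketch with arbitrary n, k, and evaluation point):

```python
from scipy import stats
from scipy.special import comb

n, k = 10, 3
x = 0.25            # evaluate the density at one point

# multinomial coefficient (n; 1, k-1, n-k) = C(n,1) * C(n-1, k-1)
coeff = comb(n, 1, exact=True) * comb(n - 1, k - 1, exact=True)
f_formula = coeff * x**(k - 1) * (1 - x)**(n - k)

# Known fact: the kth order statistic of n U(0,1) variables is Beta(k, n-k+1)
print(f_formula, stats.beta(k, n - k + 1).pdf(x))   # identical values
```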
Cumulative distribution function of the kth order statistic

• Via integrating the density function:

F_{X(k)}(y) = ∫_{-∞}^{y} f_{X(k)}(x) dx

= ∫_{-∞}^{y} C(n; 1, k−1, n−k) f(x) [F(x)]^{k-1} [1 - F(x)]^{n-k} dx

= C(n; 1, k−1, n−k) ∫_{-∞}^{y} f(x) [F(x)]^{k-1} [1 - F(x)]^{n-k} dx

= C(n; 1, k−1, n−k) ∫_0^{F(y)} z^{k-1} (1 - z)^{n-k} dz

(substituting z = F(x), dz = f(x)dx in the last step)
• Via a direct argument.

F_{X(k)}(y) = P{X(k) ≤ y}

= P{≥ k of the Xi's are ≤ y}

= Σ_{j=k}^{n} P{exactly j of the Xi's are ≤ y}

= Σ_{j=k}^{n} C(n, j) [F(y)]^j [1 - F(y)]^{n-j}
• A more intuitive solution, by partitioning the event {X(k) ≤ y} into disjoint
pieces according to where y falls among the order statistics:

{X(k) ≤ y} = {y ∈ [X(n), ∞)} ∪ {y ∈ [X(n-1), X(n))} ∪ ... ∪ {y ∈ [X(k), X(k+1))}

P{y ∈ [X(n), ∞)} = C(n, n) [F(y)]^n

P{y ∈ [X(n-1), X(n))} = C(n, n−1) [F(y)]^{n-1} [1 - F(y)]

...

P{y ∈ [X(k), X(k+1))} = C(n, k) [F(y)]^k [1 - F(y)]^{n-k}

Adding these disjoint pieces again gives

F_{X(k)}(y) = Σ_{j=k}^{n} C(n, j) [F(y)]^j [1 - F(y)]^{n-j}
An interesting identity, obtained by letting X ∼ U(0, 1):

Σ_{j=k}^{n} C(n, j) y^j (1 - y)^{n-j}
= C(n; 1, k−1, n−k) ∫_0^{y} x^{k-1} (1 - x)^{n-k} dx,   0 < y < 1.
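A numerical check of the identity (a sketch; the test values n, k, y are arbitrary):

```python
from scipy.special import comb
from scipy.integrate import quad

n, k, y = 9, 4, 0.37    # arbitrary test values, 0 < y < 1

lhs = sum(comb(n, j, exact=True) * y**j * (1 - y)**(n - j)
          for j in range(k, n + 1))

coeff = comb(n, 1, exact=True) * comb(n - 1, k - 1, exact=True)
integral, _ = quad(lambda x: x**(k - 1) * (1 - x)**(n - k), 0, y)

print(lhs, coeff * integral)   # the two sides agree
```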
Joint Probability Distribution of Functions of Random Variables

X1 and X2 are jointly continuous random variables with probability density
function f_{X1,X2}.

Y1 = g1(X1, X2),   Y2 = g2(X1, X2).

The goal: determine the joint density function f_{Y1,Y2}.


The formula to be remembered:

f_{Y1,Y2}(y1, y2) = f_{X1,X2}(x1, x2) |J(x1, x2)|^{-1}

where the inverse functions are

x1 = h1(y1, y2),   x2 = h2(y1, y2)

and the Jacobian is

J(x1, x2) = | ∂g1/∂x1  ∂g1/∂x2 |
            | ∂g2/∂x1  ∂g2/∂x2 |  =  (∂g1/∂x1)(∂g2/∂x2) − (∂g1/∂x2)(∂g2/∂x1)

Two conditions:

• y1 = g1(x1, x2) and y2 = g2(x1, x2) can be uniquely solved:
  x1 = h1(y1, y2) and x2 = h2(y1, y2)

• The Jacobian J(x1, x2) ≠ 0 for all points (x1, x2).
Example 7a: Let X1 and X2 be jointly continuous random variables with
probability density f_{X1,X2}. Let Y1 = X1 + X2 and Y2 = X1 − X2.

Q: Find f_{Y1,Y2} in terms of f_{X1,X2}.


Solution:

y1 = g1(x1, x2) = x1 + x2,   y2 = g2(x1, x2) = x1 − x2

=⇒ x1 = h1(y1, y2) = (y1 + y2)/2,   x2 = h2(y1, y2) = (y1 − y2)/2

J(x1, x2) = | 1  1 |
            | 1 −1 |  = −2

Therefore,

f_{Y1,Y2}(y1, y2) = f_{X1,X2}(x1, x2) |J(x1, x2)|^{-1}
                  = (1/2) f_{X1,X2}( (y1 + y2)/2, (y1 − y2)/2 )
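A point-wise check of this formula when X1 and X2 are taken to be independent standard normals (an assumption for the sketch; in that case Y1 and Y2 are independent N(0, 2)):

```python
import numpy as np
from scipy import stats

def f_X(x1, x2):
    # assumed joint density: independent standard normals
    return stats.norm.pdf(x1) * stats.norm.pdf(x2)

def f_Y(y1, y2):
    # Example 7a's formula
    return 0.5 * f_X((y1 + y2) / 2, (y1 - y2) / 2)

y1, y2 = 0.7, -1.2
# For this choice of f_X, Y1 and Y2 are independent N(0, 2):
direct = stats.norm.pdf(y1, scale=np.sqrt(2)) * stats.norm.pdf(y2, scale=np.sqrt(2))
print(f_Y(y1, y2), direct)   # identical values
```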
Example 7b: The joint density in polar coordinates.

x = h1(r, θ) = r cos θ,   y = h2(r, θ) = r sin θ,   i.e.,

r = g1(x, y) = √(x² + y²),   θ = g2(x, y) = tan^{-1}(y/x).

Q (a): Express f_{R,Θ}(r, θ) in terms of f_{X,Y}(x, y).

Q (b): What if X and Y are independent standard normal random variables?

Solution: First, compute the Jacobian J(x, y):

∂g1/∂x = x/√(x² + y²) = cos θ

∂g1/∂y = y/√(x² + y²) = sin θ

∂g2/∂x = [1/(1 + (y/x)²)] (−y/x²) = −y/(x² + y²) = −sin θ / r

∂g2/∂y = [1/(1 + (y/x)²)] (1/x) = x/(x² + y²) = cos θ / r

J = (∂g1/∂x)(∂g2/∂y) − (∂g1/∂y)(∂g2/∂x) = cos²θ/r + sin²θ/r = 1/r
f_{R,Θ}(r, θ) = f_{X,Y}(x, y) |J|^{-1} = r f_{X,Y}(r cos θ, r sin θ)

where 0 < r < ∞, 0 < θ < 2π.

If X and Y are independent standard normals, then

f_{R,Θ}(r, θ) = r f_{X,Y}(r cos θ, r sin θ) = (r/2π) e^{-(x²+y²)/2} = r e^{-r²/2} / (2π)

R and Θ are independent, and Θ ∼ U(0, 2π).

Example 7d: X1, X2, and X3 are independent standard normal random
variables. Let

Y1 = X1 + X2 + X3
Y2 = X1 − X2
Y3 = X1 − X3

Q: Compute the joint density function f_{Y1,Y2,Y3}(y1, y2, y3).

Solution:

Compute the Jacobian:

J = | 1  1  1 |
    | 1 −1  0 |
    | 1  0 −1 |  = 3

Solve for X1, X2, and X3:

X1 = (Y1 + Y2 + Y3)/3,   X2 = (Y1 − 2Y2 + Y3)/3,   X3 = (Y1 + Y2 − 2Y3)/3

f_{Y1,Y2,Y3}(y1, y2, y3) = f_{X1,X2,X3}(x1, x2, x3) |J|^{-1}

= [1/(3(2π)^{3/2})] e^{-(x1² + x2² + x3²)/2}

= [1/(3(2π)^{3/2})] e^{-( y1²/3 + 2y2²/3 + 2y3²/3 − 2y2y3/3 )/2}
More about Determinants

| a b c |
| d e f |  = aei + bfg + cdh − ceg − bdi − afh
| g h i |

For a (lower) triangular matrix (× denotes an arbitrary entry):

| x1  0   0   0  ...  0  |
| ×   x2  0   0  ...  0  |
| ×   ×   x3  0  ...  0  |
| ×   ×   ×   x4 ...  0  |  = x1 x2 x3 ... xn
| ×   ×   ×   ×  ...  0  |
| ×   ×   ×   ×  ×    xn |
Chapter 7: Properties of Expectation

If X is a discrete random variable with mass function p(x), then

E(X) = Σ_x x p(x)

If X is a continuous random variable with density f(x), then

E(X) = ∫_{-∞}^{∞} x f(x) dx

If a ≤ X ≤ b, then a ≤ E(X) ≤ b.
Proposition 2.1

If X and Y have a joint probability mass function p(x, y), then

E[g(X, Y)] = Σ_y Σ_x g(x, y) p(x, y)

If X and Y have a joint probability density function f(x, y), then

E[g(X, Y)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} g(x, y) f(x, y) dx dy

What if g(X, Y) = X + Y?



Example 2a: An accident occurs at a point X that is uniformly distributed on


a road of length L. At the time of the accident an ambulance is at a location Y
that is also uniformly distributed on the road. Assuming that X and Y are
independent, find the expected distance between the ambulance and the point of
the accident.

Solution:

X ∼ U(0, L), Y ∼ U(0, L). We want E(|X − Y|).

E(|X − Y|) = ∫∫ |x − y| f(x, y) dx dy

= (1/L²) ∫_0^L ∫_0^L |x − y| dx dy

= (1/L²) ∫_0^L ∫_0^x (x − y) dy dx + (1/L²) ∫_0^L ∫_x^L (y − x) dy dx

= (1/L²) ∫_0^L [xy − y²/2]_{y=0}^{y=x} dx + (1/L²) ∫_0^L [y²/2 − xy]_{y=x}^{y=L} dx

= (1/L²) ∫_0^L (x²/2) dx + (1/L²) ∫_0^L (L²/2 − xL + x²/2) dx

= L/3
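A Monte Carlo sanity check (a sketch; L and the replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
L, reps = 10.0, 1_000_000

x = rng.uniform(0, L, reps)   # accident location
y = rng.uniform(0, L, reps)   # ambulance location
print(np.abs(x - y).mean(), L / 3)   # both ≈ 3.333
```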
Let X and Y be continuous random variables with density f(x, y), and let
g(X, Y) = X + Y.

E[g(X, Y)] = E(X + Y)

= ∫_{-∞}^{∞} ∫_{-∞}^{∞} (x + y) f(x, y) dx dy

= ∫∫ x f(x, y) dx dy + ∫∫ y f(x, y) dx dy

= ∫ x [∫ f(x, y) dy] dx + ∫ y [∫ f(x, y) dx] dy

= ∫ x f_X(x) dx + ∫ y f_Y(y) dy

= E(X) + E(Y)
Linearity of Expectations

E(X + Y) = E(X) + E(Y)

E(X + Y + Z) = E((X + Y) + Z) = E(X + Y) + E(Z) = E(X) + E(Y) + E(Z)

E(X1 + X2 + ... + Xn) = E(X1) + E(X2) + ... + E(Xn)

Linearity is not affected by correlations among the Xi's.
No assumption of independence is needed.
The Sample Mean

Let X1, X2, ..., Xn be identically distributed random variables having expected
value μ. The quantity X̄, defined by

X̄ = (1/n) Σ_{i=1}^{n} Xi,

is called the sample mean.


E(X̄) = E[ (1/n) Σ_{i=1}^{n} Xi ]

= (1/n) E[ Σ_{i=1}^{n} Xi ]

= (1/n) Σ_{i=1}^{n} E[Xi]

= (1/n) Σ_{i=1}^{n} μ

= (1/n) (nμ) = μ

When μ is unknown, an unbiased estimator of μ is X̄.


Expectation of a Binomial Random Variable

X ∼ Binomial(n, p) can be written as

X = Z1 + Z2 + ... + Zn

where the Zi, i = 1 to n, are independent Bernoulli with probability of success p.

Because E(Zi) = p,

E(X) = E( Σ_{i=1}^{n} Zi ) = Σ_{i=1}^{n} E(Zi) = np
Expectation of a Hypergeometric Random Variable

Suppose n balls are randomly selected, without replacement, from an urn
containing N balls of which m are white. Denote X ∼ HG(N, m, n).

Q: Find E(X).

Solution 1: Let X = X1 + X2 + ... + Xm, where

Xi = 1 if the ith white ball is selected, and 0 otherwise.

Note that the Xi's are not independent; but we only need the expectation.
E(Xi) = P{Xi = 1} = P{the ith white ball is selected}

= C(1, 1) C(N−1, n−1) / C(N, n) = n/N

Therefore,

E(X) = m (n/N) = mn/N.
Solution 2: Let X = Y1 + Y2 + ... + Yn, where

Yi = 1 if the ith ball selected is white, and 0 otherwise.

E(Yi) = P{Yi = 1} = P{the ith ball selected is white} = m/N

Therefore,

E(X) = n (m/N) = mn/N.
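Both solutions give E(X) = mn/N; a quick simulation check (a sketch with arbitrary N, m, n):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, n, reps = 20, 8, 5, 1_000_000

# rng.hypergeometric(ngood, nbad, nsample) counts white balls drawn
draws = rng.hypergeometric(m, N - m, n, size=reps)
print(draws.mean(), m * n / N)   # both ≈ 2.0
```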
Expected Number of Matches

A group of N people throw their hats into the center of a room. The hats are
mixed up and each person randomly selects one.

Q: Find the expected number of people who select their own hat.
Solution: Let X denote the number of matches and let

X = X1 + X2 + ... + XN,

where Xi = 1 if the ith person selects his own hat, and 0 otherwise.

E(Xi) = P{Xi = 1} = 1/N

Thus,

E(X) = Σ_{i=1}^{N} E(Xi) = N (1/N) = 1
Expectations May Be Undefined

A more rigorous definition: If X is a discrete random variable with mass
function p(x), then

E(X) = Σ_x x p(x),   provided Σ_x |x| p(x) < ∞

If X is a continuous random variable with density f(x), then

E(X) = ∫_{-∞}^{∞} x f(x) dx,   provided ∫_{-∞}^{∞} |x| f(x) dx < ∞
Example (Cauchy Distribution):

f(x) = (1/π) 1/(x² + 1),   −∞ < x < ∞

E(X) = ∫_{-∞}^{∞} (x/π) 1/(x² + 1) dx = 0 ???   (The answer is NO.)

because

∫_{-∞}^{∞} (|x|/π) 1/(x² + 1) dx = ∫_0^{∞} (1/π) 2x/(x² + 1) dx

= ∫_0^{∞} (1/π) 1/(x² + 1) d[x² + 1]

= (1/π) log(x² + 1) |_0^{∞}

= (1/π) log ∞ = ∞
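A simulation that shows the consequence (a sketch): sample means of standard Cauchy draws refuse to settle down, no matter how large n gets, because E(X) is undefined.

```python
import numpy as np

rng = np.random.default_rng(0)

# The running sample mean never stabilizes, since E|X| = infinity.
for n in [10**3, 10**5, 10**7]:
    print(n, rng.standard_cauchy(n).mean())
```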
Section 7.3: Moments of the Number of Events

Ii = 1 if Ai occurs, and 0 otherwise.

X = Σ_{i=1}^{n} Ii = the number of the events Ai that occur. We know

E(X) = Σ_{i=1}^{n} E(Ii) = Σ_{i=1}^{n} P(Ai)

Suppose we are interested in the number of pairs of events that occur, e.g., Ai Aj.
X = Σ_{i=1}^{n} Ii = the number of the events Ai that occur.

C(X, 2) = X(X − 1)/2 = the number of pairs of events that occur

⇐⇒

Σ_{i<j} Ii Ij = the number of pairs of events that occur, because

Ii Ij = 1 if both Ai and Aj occur, and 0 otherwise.

Therefore,

E[ X(X − 1)/2 ] = Σ_{i<j} P(Ai Aj)

E(X²) − E(X) = 2 Σ_{i<j} P(Ai Aj)
Second Moment of a Binomial Random Variable

n independent trials, with each trial being a success with probability p.

Ai = the event that trial i is a success.

If i ≠ j, then P(Ai Aj) = p².

E(X²) − E(X) = 2 Σ_{i<j} P(Ai Aj) = 2 C(n, 2) p² = n(n − 1)p²

E(X²) = n(n − 1)p² + np = n²p² − np² + np

Var(X) = [n²p² − np² + np] − [np]² = np(1 − p)
Second Moment of a Hypergeometric Random Variable

n balls are randomly selected without replacement from an urn containing N
balls, of which m are white.

Ai = event that the ith ball selected is white.

X = number of white balls selected. We have shown

E(X) = Σ_{i=1}^{n} P(Ai) = n (m/N).

P(Ai Aj) = P(Ai) P(Aj|Ai) = (m/N) (m − 1)/(N − 1),   i ≠ j

E(X²) − E(X) = 2 Σ_{i<j} P(Ai Aj) = 2 C(n, 2) (m/N) (m − 1)/(N − 1)

E(X²) = n(n − 1) (m/N) (m − 1)/(N − 1) + nm/N

Var(X) = nm(N − n)(N − m) / [N²(N − 1)] = n (m/N) (1 − m/N) (N − n)/(N − 1)

One can view m/N = p. Note that (N − n)/(N − 1) ≈ 1 if N is much larger than n.
Second Moment of the Matching Problem

Let Ai = event that person i selects his/her own hat in the matching problem,
i = 1 to N.

P(Ai Aj) = P(Ai) P(Aj|Ai) = (1/N) 1/(N − 1),   i ≠ j

X = number of people who select their own hat. We have shown E(X) = 1.

E(X²) − E(X) = 2 Σ_{i<j} P(Ai Aj) = N(N − 1) · 1/[N(N − 1)] = 1

Var(X) = E(X²) − E²(X) = (1 + 1) − 1² = 1
Section 7.4: Covariance and Correlations

Proposition 4.1: If X and Y are independent, then

E(XY) = E(X)E(Y).

More generally, for any functions h and g,

E[h(X) g(Y)] = E[h(X)] E[g(Y)].


Proof (for X and Y jointly continuous):

E[h(X) g(Y)] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(x) g(y) f(x, y) dx dy

= ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(x) g(y) f_X(x) f_Y(y) dx dy

= [ ∫_{-∞}^{∞} h(x) f_X(x) dx ] [ ∫_{-∞}^{∞} g(y) f_Y(y) dy ]

= E[h(X)] E[g(Y)]
Covariance

Definition: The covariance between X and Y is defined by

Cov(X, Y) = E[(X − E[X])(Y − E[Y])]

Equivalently,

Cov(X, Y) = E(XY) − E(X)E(Y)


Proof:

Cov(X, Y) = E[(X − E[X])(Y − E[Y])]

= E[ XY − E[X]Y − E[Y]X + E[X]E[Y] ]

= E[XY] − E[ E[X]Y ] − E[ E[Y]X ] + E[ E[X]E[Y] ]

= E[XY] − E[X]E[Y] − E[Y]E[X] + E[X]E[Y]

= E[XY] − E[X]E[Y]

Note that E(X) is a constant, and so is E(X)E(Y).


X and Y independent =⇒ Cov(X, Y) = E(XY) − E(X)E(Y) = 0.

However, X and Y being uncorrelated, i.e., Cov(X, Y) = 0, does not imply that
X and Y are independent. For example, if X ∼ N(0, 1) and Y = X², then
Cov(X, Y) = E(X³) − E(X)E(X²) = 0, yet Y is completely determined by X.
Properties of Covariance: Proposition 4.2

• Cov(X, Y) = Cov(Y, X)
• Cov(X, X) = Var(X)
• Cov(aX, Y) = a Cov(X, Y)

These follow directly from the definition Cov(X, Y) = E(XY) − E(X)E(Y).

Cov( Σ_{i=1}^{n} Xi, Σ_{j=1}^{m} Yj ) = Σ_{i=1}^{n} Σ_{j=1}^{m} Cov(Xi, Yj)
Proof:

Cov( Σ_{i=1}^{n} Xi, Σ_{j=1}^{m} Yj )

= E[ (Σ_{i=1}^{n} Xi)(Σ_{j=1}^{m} Yj) ] − E( Σ_{i=1}^{n} Xi ) E( Σ_{j=1}^{m} Yj )

= E[ Σ_{i=1}^{n} Σ_{j=1}^{m} Xi Yj ] − Σ_{i=1}^{n} E(Xi) Σ_{j=1}^{m} E(Yj)

= Σ_{i=1}^{n} Σ_{j=1}^{m} E(Xi Yj) − Σ_{i=1}^{n} Σ_{j=1}^{m} E(Xi)E(Yj)

= Σ_{i=1}^{n} Σ_{j=1}^{m} [ E(Xi Yj) − E(Xi)E(Yj) ]

= Σ_{i=1}^{n} Σ_{j=1}^{m} Cov(Xi, Yj)
Cov( Σ_{i=1}^{n} Xi, Σ_{j=1}^{m} Yj ) = Σ_{i=1}^{n} Σ_{j=1}^{m} Cov(Xi, Yj)

If n = m and Xi = Yi, then

Var( Σ_{i=1}^{n} Xi ) = Cov( Σ_{i=1}^{n} Xi, Σ_{j=1}^{n} Xj ) = Σ_{i=1}^{n} Σ_{j=1}^{n} Cov(Xi, Xj)

= Σ_{i=1}^{n} Cov(Xi, Xi) + Σ_{i≠j} Cov(Xi, Xj)

= Σ_{i=1}^{n} Var(Xi) + Σ_{i≠j} Cov(Xi, Xj)

= Σ_{i=1}^{n} Var(Xi) + 2 Σ_{i<j} Cov(Xi, Xj)
If the Xi's are pairwise independent, then

Var( Σ_{i=1}^{n} Xi ) = Σ_{i=1}^{n} Var(Xi)

Example 4b: X ∼ Binomial(n, p) can be written as

X = X1 + X2 + ... + Xn,

where Xi ∼ Bernoulli(p) and the Xi's are independent. Therefore

Var(X) = Var( Σ_{i=1}^{n} Xi ) = Σ_{i=1}^{n} Var(Xi) = Σ_{i=1}^{n} p(1 − p) = np(1 − p).
Example 4a: Let X1, X2, ..., Xn be independent and identically distributed
random variables having expected value μ and variance σ². Let X̄ be the
sample mean and S² be the sample variance, where

X̄ = (1/n) Σ_{i=1}^{n} Xi,   S² = (1/(n−1)) Σ_{i=1}^{n} (Xi − X̄)²

Q: Find Var(X̄) and E(S²).


Solution:

Var(X̄) = Var( (1/n) Σ_{i=1}^{n} Xi )

= (1/n²) Var( Σ_{i=1}^{n} Xi )

= (1/n²) Σ_{i=1}^{n} Var(Xi)      (by independence)

= (1/n²) (nσ²) = σ²/n

We've already shown E(X̄) = μ. As the sample size n increases, the variance
of X̄, the unbiased estimator of μ, decreases.
The Less Clever Approach (different from the book)

E(S²) = E[ (1/(n−1)) Σ_{i=1}^{n} (Xi − X̄)² ]

= (1/(n−1)) E[ Σ_{i=1}^{n} (Xi² + X̄² − 2X̄Xi) ]

= (1/(n−1)) E[ Σ_{i=1}^{n} Xi² + nX̄² − 2X̄ Σ_{i=1}^{n} Xi ]

= (1/(n−1)) E[ Σ_{i=1}^{n} Xi² − nX̄² ]

= (1/(n−1)) [ Σ_{i=1}^{n} E(Xi²) − n E(X̄²) ]

= (n/(n−1)) [ μ² + σ² − E(X̄²) ]
E(X̄²) = (1/n²) E[ (Σ_{i=1}^{n} Xi)² ] = (1/n²) E[ Σ_i Xi² + 2 Σ_{i<j} Xi Xj ]

= (1/n²) [ Σ_i E(Xi²) + 2 Σ_{i<j} E(Xi Xj) ]

= (1/n²) [ Σ_i (μ² + σ²) + 2 Σ_{i<j} μ² ]

= (1/n²) [ n(μ² + σ²) + n(n−1)μ² ] = σ²/n + μ²

Or, directly,

E(X̄²) = Var(X̄) + E²(X̄) = σ²/n + μ²
Therefore,

E(S²) = (n/(n−1)) [ μ² + σ² − E(X̄²) ]

= (n/(n−1)) [ μ² + σ² − σ²/n − μ² ]

= σ²

The sample variance S² = (1/(n−1)) Σ_{i=1}^{n} (Xi − X̄)² is an unbiased estimator of σ².

Why 1/(n−1) instead of 1/n?

Σ_{i=1}^{n} (Xi − X̄)² is a sum of n terms with only n − 1 degrees of freedom,
because X̄ = (1/n) Σ_{i=1}^{n} Xi.
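A simulation of the unbiasedness claim (a sketch; the distribution, n, σ², and replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, sigma2 = 5, 200_000, 4.0

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
s2 = samples.var(axis=1, ddof=1)      # divide by n-1
s2_biased = samples.var(axis=1)       # divide by n

print(s2.mean(), sigma2)                        # ≈ 4.0: unbiased
print(s2_biased.mean(), (n - 1) / n * sigma2)   # ≈ 3.2: biased low
```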
The Standard Clever Technique for evaluating E(S²) (as in the textbook)

(n−1)S² = Σ_{i=1}^{n} (Xi − X̄)² = Σ_{i=1}^{n} (Xi − μ + μ − X̄)²

= Σ_{i=1}^{n} (Xi − μ)² + Σ_{i=1}^{n} (X̄ − μ)² − 2 Σ_{i=1}^{n} (Xi − μ)(X̄ − μ)

= Σ_{i=1}^{n} (Xi − μ)² + n(X̄ − μ)² − 2n(X̄ − μ)²

= Σ_{i=1}^{n} (Xi − μ)² − n(X̄ − μ)²

(using Σ_{i=1}^{n} (Xi − μ) = n(X̄ − μ) for the cross term)

E[(n−1)S²] = (n−1)E(S²) = E[ Σ_{i=1}^{n} (Xi − μ)² − n(X̄ − μ)² ]

= n Var(Xi) − n Var(X̄) = nσ² − n (σ²/n) = (n−1)σ²
The Correlation of Two Random Variables

Definition: The correlation of two random variables X and Y, denoted by
ρ(X, Y), is defined, as long as Var(X) > 0 and Var(Y) > 0, by

ρ(X, Y) = Cov(X, Y) / √(Var(X) Var(Y))

It can be shown that

−1 ≤ ρ(X, Y) ≤ 1
−1 ≤ ρ(X, Y) = Cov(X, Y) / √(Var(X) Var(Y)) ≤ 1

Proof: The basic trick: a variance is always ≥ 0.

Var( X/√Var(X) + Y/√Var(Y) )

= Var(X)/Var(X) + Var(Y)/Var(Y) + 2 Cov(X, Y)/√(Var(X) Var(Y))

= 2 (1 + ρ(X, Y)) ≥ 0

Therefore, ρ(X, Y) ≥ −1.
Var( X/√Var(X) − Y/√Var(Y) )

= Var(X)/Var(X) + Var(Y)/Var(Y) − 2 Cov(X, Y)/√(Var(X) Var(Y))

= 2 (1 − ρ(X, Y)) ≥ 0

Therefore, ρ(X, Y) ≤ 1.

• ρ(X, Y ) = +1 =⇒ Y = a + bX , and b > 0.

• ρ(X, Y ) = −1 =⇒ Y = a − bX , and b > 0.

• ρ(X, Y ) = 0 =⇒ X and Y are uncorrelated

• ρ(X, Y ) is close to +1 =⇒ X and Y are highly positively correlated.

• ρ(X, Y ) is close to −1 =⇒ X and Y are highly negatively correlated.


Section 7.5: Conditional Expectation

If X and Y are jointly discrete random variables,

E[X|Y = y] = Σ_x x P{X = x|Y = y} = Σ_x x p_{X|Y}(x|y)

If X and Y are jointly continuous random variables,

E[X|Y = y] = ∫_{-∞}^{∞} x f_{X|Y}(x|y) dx
Example 5b: Suppose the joint density of X and Y is given by

f(x, y) = (1/y) e^{-x/y} e^{-y},   0 < x, y < ∞

Q: Compute

E[X|Y = y] = ∫_{-∞}^{∞} x f_{X|Y}(x|y) dx

What should the answer look like? (A function of y.)
Solution:

f_{X|Y}(x|y) = f(x, y) / f_Y(y) = f(x, y) / ∫_{-∞}^{∞} f(x, y) dx

= (1/y) e^{-x/y} e^{-y} / ∫_0^{∞} (1/y) e^{-x/y} e^{-y} dx

= (1/y) e^{-x/y} e^{-y} / ( e^{-y} [−e^{-x/y}]_0^{∞} )

= (1/y) e^{-x/y} e^{-y} / e^{-y}

= (1/y) e^{-x/y},   x > 0

E[X|Y = y] = ∫_{-∞}^{∞} x f_{X|Y}(x|y) dx = ∫_0^{∞} (x/y) e^{-x/y} dx = y

Computing Expectations by Conditioning

Proposition 5.1: The tower property

E(X) = E[E[X|Y ]]

In some cases, computing E(X) is difficult;


but computing E(X|Y ) may be much more convenient.
Proof: Assume X and Y are continuous. (The book assumes discrete variables.)

E[E[X|Y]] = ∫_{-∞}^{∞} E(X|Y = y) f_Y(y) dy

= ∫_{-∞}^{∞} [ ∫_{-∞}^{∞} x f_{X|Y}(x|y) dx ] f_Y(y) dy

= ∫_{-∞}^{∞} ∫_{-∞}^{∞} x f_{X|Y}(x|y) f_Y(y) dx dy

= ∫_{-∞}^{∞} ∫_{-∞}^{∞} x f(x, y) dx dy

= ∫_{-∞}^{∞} x [ ∫_{-∞}^{∞} f(x, y) dy ] dx

= ∫_{-∞}^{∞} x f_X(x) dx = E(X)
Example 5d: Suppose N, the number of people entering a store on a given
day, is a random variable with mean 50. Suppose further that the amounts of
money spent by these customers are independent random variables having a
common mean $8. Assume the amount of money spent by a customer is also
independent of the total number of customers to enter the store.

Q: What is the expected amount of money spent in the store on a given day?

Solution:

N: the number of people entering the store. E(N) = 50.
Xi: the amount spent by the ith customer. E(Xi) = 8.
X: the total amount of money spent in the store on a given day. E(X) = ??
We cannot use linearity directly:

E(X) = E( Σ_{i=1}^{N} Xi ) = Σ_{i=1}^{N} E(Xi) ???

because N, the number of terms in the summation, is itself random.
Instead, condition on N:

E(X) = E( Σ_{i=1}^{N} Xi )

= E[ E( Σ_{i=1}^{N} Xi | N ) ]

= E[ Σ_{i=1}^{N} E(Xi) ]

= E(8N) = 8 E(N)

= 8 × 50 = $400
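A quick simulation under assumed distributions (the example fixes only E(N) = 50 and E(Xi) = 8; the Poisson and exponential choices below are arbitrary assumptions consistent with those means):

```python
import numpy as np

rng = np.random.default_rng(0)

N = rng.poisson(50, 20_000)                # assumed: N ~ Poisson(50)
totals = np.array([rng.exponential(8.0, n).sum() for n in N])
                                           # assumed: X_i ~ Exponential(mean 8)
print(totals.mean(), 8 * 50)               # both ≈ 400
```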
Example 2e: A game is begun by rolling an ordinary pair of dice.

• If the sum of the dice is 2, 3, or 12, the player loses.

• If the sum is 7 or 11, the player wins.

• If the sum is any other number i, the player continues to roll the dice until the
  sum is either 7 or i.

  – If it is 7, the player loses;

  – If it is i, the player wins.

—————————–

Q: Compute E(R), where R = number of rolls in a game.

Conditioning on S, the initial sum,

E(R) = Σ_{i=2}^{12} E(R|S = i) P(S = i)

Let P(S = i) = Pi.

P2 = 1/36,  P3 = 2/36,  P4 = 3/36,  P5 = 4/36,  P6 = 5/36,  P7 = 6/36,
P8 = 5/36,  P9 = 4/36,  P10 = 3/36, P11 = 2/36, P12 = 1/36

Equivalently,

Pi = P_{14−i} = (i − 1)/36,   i = 2, 3, ..., 7
Obviously,

E(R|S = i) = 1 if i = 2, 3, 7, 11, 12;   E(R|S = i) = 1 + E(Z) otherwise,

where Z = the number of additional rolls until the sum is either i or 7.
(Note that i ∉ {2, 3, 7, 11, 12} at this point.)

Z is a geometric random variable with success probability Pi + P7.
Thus E(Z) = 1/(Pi + P7).

Therefore,

E(R) = Σ_i E(R|S = i) Pi = 1 + Σ_{i=4}^{6} Pi/(Pi + P7) + Σ_{i=8}^{10} Pi/(Pi + P7) = 3.376
Computing Probabilities by Conditioning

If Y is discrete, then

P(E) = Σ_y P(E|Y = y) P(Y = y)

If Y is continuous, then

P(E) = ∫_{-∞}^{∞} P(E|Y = y) f_Y(y) dy
The Best Prize Problem: Deal? No Deal!

We are presented with n distinct prizes in sequence. After being presented with a
prize we must decide whether to accept it or to reject it and consider the next
prize. The objective is to maximize the probability of obtaining the best prize.
Assume that all n! orderings are equally likely.

Q: How well can we do? What would be a good strategy?

—————————–

Many related applications: online algorithms, hiring decisions, dating, etc.

A plausible strategy: Fix a number k, 0 ≤ k ≤ n. Reject the first k prizes,
and then accept the first one that is better than all of those first k.

Let X = the location of the best prize, i.e., 1 ≤ X ≤ n.

Let Pk(best) = the probability that the best prize is selected using this strategy.

Pk(best) = Σ_{i=1}^{n} Pk(best|X = i) P(X = i) = (1/n) Σ_{i=1}^{n} Pk(best|X = i)

Obviously,

Pk(best|X = i) = 0,   if i ≤ k
Assume i > k. The strategy selects the best prize (at location i) exactly when
the best of the first i − 1 prizes is among the first k, so

Pk(best|X = i) = Pk(best of the first i − 1 is among the first k | X = i) = k/(i − 1)

Therefore,

Pk(best) = (1/n) Σ_{i=1}^{n} Pk(best|X = i) = (k/n) Σ_{i=k+1}^{n} 1/(i − 1)

Next question: which k should we use? We choose the k that maximizes Pk(best).
Exhaustive search works fine here, since it only involves a linear scan over k.
However, it is often useful to have an "analytical" (albeit approximate) expression.
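The linear scan in a few lines (a sketch; n = 100 is arbitrary):

```python
import numpy as np

def p_best(n, k):
    # P_k(best) = (k/n) * sum_{i=k+1}^{n} 1/(i-1); k = 0 means "take the first"
    if k == 0:
        return 1.0 / n
    return (k / n) * sum(1.0 / (i - 1) for i in range(k + 1, n + 1))

n = 100
best_k = max(range(n), key=lambda k: p_best(n, k))
print(best_k, n / np.e)     # 37 vs 36.79: the optimal k is close to n/e
print(p_best(n, best_k))    # ≈ 0.371, close to 1/e
```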


A useful approximate formula:

Σ_{i=1}^{n} 1/i = 1 + 1/2 + 1/3 + ... + 1/n ≈ log n + γ

where Euler's constant

γ = 0.57721566490153286060651209008240243104215933593992...

The approximation is asymptotically (as n → ∞) exact.
Pk(best) = (k/n) Σ_{i=k+1}^{n} 1/(i − 1)

= (k/n) [ 1/k + 1/(k+1) + ... + 1/(n−1) ]

= (k/n) [ Σ_{i=1}^{n−1} 1/i − Σ_{i=1}^{k−1} 1/i ]

≈ (k/n) [ log(n − 1) − log(k − 1) ]

= (k/n) log( (n − 1)/(k − 1) )
Pk(best) ≈ (k/n) log( (n − 1)/(k − 1) ) ≈ (k/n) log(n/k)

d/dk Pk(best) ≈ (1/n) log(n/k) − (k/n)(1/k) = 0   =⇒   log(n/k) = 1   =⇒   k = n/e

Also, when k = n/e, Pk(best) ≈ 1/e ≈ 0.36788, as n → ∞.
Conditional Variance

Proposition 5.2:

Var(X) = E[Var(X|Y)] + Var(E[X|Y])

Proof: (from the opposite direction, compared to the book)

E[Var(X|Y)] + Var(E[X|Y])

= E[ E(X²|Y) − E²(X|Y) ] + E(E²(X|Y)) − E²(E[X|Y])

= E(E(X²|Y)) − E(E²(X|Y)) + E(E²(X|Y)) − E²(E[X|Y])

= E(X²) − E²(X)

= Var(X)
Example 5o: Let X1, X2, ..., XN be a sequence of independent and
identically distributed random variables, and let N be a non-negative
integer-valued random variable independent of the Xi.

Q: Compute Var( Σ_{i=1}^{N} Xi ).

Solution:

Var( Σ_{i=1}^{N} Xi ) = E[ Var( Σ_{i=1}^{N} Xi | N ) ] + Var( E[ Σ_{i=1}^{N} Xi | N ] )

= E( N Var(X) ) + Var( N E(X) )

= Var(X) E(N) + E²(X) Var(N)
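A simulation of this compound-sum variance formula under assumed distributions (the formula only needs E(X), Var(X), E(N), Var(N); Poisson and uniform are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

EX, VarX = 3.0, 3.0        # mean and variance of U(0, 6)
EN, VarN = 20.0, 20.0      # mean and variance of Poisson(20)

N = rng.poisson(20, 20_000)                       # assumed: N ~ Poisson(20)
totals = np.array([rng.uniform(0, 6, n).sum() for n in N])
                                                  # assumed: X_i ~ U(0, 6)
print(totals.var(), VarX * EN + EX**2 * VarN)     # both ≈ 240
```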
